Continuous Control With Deep Reinforcement Learning,
Lillicrap et al, 2015.
옥찬호
utilForever@gmail.com
Introduction
• Deep Q Network solves problems with...
• High-dimensional observation spaces
• Discrete and low-dimensional action spaces
• Limitation of DQN
• For example, a 7-degree-of-freedom system with the coarsest discretization 𝑎𝑖 ∈ {−𝑘, 0, 𝑘} for each joint leads to an action space with dimensionality 3⁷ = 2187 (a quick enumeration is sketched below).
→ How to solve the high-dimensional action space problem?
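A quick enumeration of the example above (illustrative Python only; the value of 𝑘 is arbitrary, all that matters is the count):

```python
from itertools import product

k = 1.0                  # example torque magnitude; the slide only says -k, 0, +k
levels = (-k, 0.0, k)    # coarsest discretization per joint
num_joints = 7           # 7-degree-of-freedom system from the slide

# Every joint independently picks one of 3 levels, so the discrete action set is 3**7.
actions = list(product(levels, repeat=num_joints))
print(len(actions))      # 2187
```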
Introduction
• Use Deterministic Policy Gradient (DPG) algorithm!
• However, a naive application of this actor-critic method with
neural function approximators is unstable for challenging problems.
→ How?
• Combine actor-critic approach with insights from Deep Q Network (DQN)
• Use replay buffer to minimize correlations between samples.
• Use target Q network to give consistent targets.
• Use batch normalization.
Background
• The action-value function is used in many reinforcement
learning algorithms. It describes the expected return after
taking an action 𝑎𝑡 in state 𝑠𝑡 and thereafter following policy 𝜋:
• Action-value function using the Bellman equation
• If the target policy is deterministic, the inner expectation over actions can be removed (equations reconstructed below)
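The three equations referred to above, as given in the paper (transcribed here for reference): the definition of the action-value function, its Bellman form, and the deterministic-policy case.

$$Q^{\pi}(s_t, a_t) = \mathbb{E}_{r_{i \ge t},\, s_{i > t} \sim E,\, a_{i > t} \sim \pi}\big[\, R_t \mid s_t, a_t \,\big]$$

$$Q^{\pi}(s_t, a_t) = \mathbb{E}_{r_t,\, s_{t+1} \sim E}\Big[\, r(s_t, a_t) + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi}\big[ Q^{\pi}(s_{t+1}, a_{t+1}) \big] \Big]$$

$$Q^{\mu}(s_t, a_t) = \mathbb{E}_{r_t,\, s_{t+1} \sim E}\Big[\, r(s_t, a_t) + \gamma\, Q^{\mu}\big(s_{t+1}, \mu(s_{t+1})\big) \Big]$$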
Background
• Q-learning, a commonly used off-policy algorithm, uses the
greedy policy 𝜇(𝑠) = argmax𝑎 𝑄(𝑠, 𝑎). We consider function approximators parameterized by 𝜃^𝑄, which we optimize by minimizing the loss:
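The loss referred to above, as given in the paper (transcribed here; these are equations 4 and 5, which the "Soft target update" slide cites later):

$$L(\theta^{Q}) = \mathbb{E}_{s_t \sim \rho^{\beta},\, a_t \sim \beta,\, r_t \sim E}\Big[\big( Q(s_t, a_t \mid \theta^{Q}) - y_t \big)^{2}\Big], \qquad y_t = r(s_t, a_t) + \gamma\, Q\big(s_{t+1}, \mu(s_{t+1}) \mid \theta^{Q}\big)$$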
Algorithm
• It is not possible to straightforwardly apply Q-learning
to continuous action spaces.
• In continuous spaces, finding the greedy policy requires an optimization of 𝑎𝑡 at every timestep: 𝜇(𝑠) = argmax𝑎 𝑄(𝑠, 𝑎)
• This optimization is too slow to be practical with large,
unconstrained function approximators and nontrivial action spaces.
• Instead, here we used an actor-critic approach
based on the DPG algorithm.
Algorithm
• The DPG algorithm maintains a parameterized actor function 𝜇(𝑠|𝜃^𝜇) which specifies the current policy by deterministically mapping states to a specific action.
• Critic network (Value): Using the Bellman equation as in DQN
• Actor network (Policy): updated with the deterministic policy gradient (reconstructed below)
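The actor update referred to above is the deterministic policy gradient; from the paper (transcribed here):

$$\nabla_{\theta^{\mu}} J \approx \mathbb{E}_{s_t \sim \rho^{\beta}}\Big[ \nabla_{a} Q(s, a \mid \theta^{Q})\big|_{s=s_t,\, a=\mu(s_t)}\; \nabla_{\theta^{\mu}} \mu(s \mid \theta^{\mu})\big|_{s=s_t} \Big]$$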
Replay buffer
• As with Q learning, introducing non-linear function approximators means
that convergence is no longer guaranteed. However, such approximators
appear essential in order to learn and generalize on large state spaces.
• NFQCA, which uses the same update rules as DPG but with neural network function approximators, relies on batch learning for stability, which is intractable for large networks. A minibatch version of NFQCA that does not reset the policy at each update, as would be required to scale to large networks, is equivalent to the original DPG.
• Our contribution here is to provide modifications to DPG, inspired by the
success of DQN, which allow it to use neural network function approximators
to learn in large state and action spaces online. We refer to our algorithm as
Deep DPG (DDPG, Algorithm 1).
Replay buffer
• As in DQN, we used a replay buffer to address these issues. The replay buffer
is a finite-sized cache ℛ. Transitions were sampled from the environment according to the exploration policy and the tuple (𝑠𝑡, 𝑎𝑡, 𝑟𝑡, 𝑠𝑡+1) was stored in
the replay buffer. When the replay buffer was full the oldest samples were
discarded. At each timestep the actor and critic are updated by sampling a minibatch uniformly from the buffer. Because DDPG is an off-policy
algorithm, the replay buffer can be large, allowing the algorithm to benefit
from learning across a set of uncorrelated transitions.
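A minimal replay-buffer sketch in Python (illustrative only, not the authors' code; the class interface is my assumption, and the 10⁶ capacity matches the buffer size reported in the paper):

```python
import random
from collections import deque

class ReplayBuffer:
    """Finite-sized cache R; the oldest transitions are discarded when full."""
    def __init__(self, capacity=int(1e6)):
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        # Uniform sampling breaks the temporal correlation between consecutive samples.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```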
Soft target update
• Directly implementing Q learning (equation 4) with neural networks proved
to be unstable in many environments. Since the network 𝑄(𝑠, 𝑎|𝜃^𝑄) being updated is also used in calculating the target value (equation 5), the Q
update is prone to divergence.
• Our solution is similar to the target network used in (Mnih et al., 2013) but
modified for actor-critic and using “soft” target updates, rather than directly
copying the weights.
Soft target update
• We create a copy of the actor and critic networks, 𝑄′(𝑠, 𝑎|𝜃^𝑄′) and 𝜇′(𝑠|𝜃^𝜇′) respectively, that are used for calculating the target values. The weights of these target networks are then updated by having them slowly track the learned networks: 𝜃′ ← 𝜏𝜃 + (1 − 𝜏)𝜃′ with 𝜏 ≪ 1. This means that the target
values are constrained to change slowly, greatly improving the stability of
learning.
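A soft target update is a single line per parameter pair; here is a PyTorch-style sketch (the framework choice is my assumption; τ = 0.001 is the value reported in the paper):

```python
import torch

def soft_update(target_net, source_net, tau=0.001):
    """theta' <- tau * theta + (1 - tau) * theta', applied parameter-wise."""
    with torch.no_grad():
        for target_param, param in zip(target_net.parameters(),
                                       source_net.parameters()):
            target_param.mul_(1.0 - tau).add_(tau * param)
```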
Batch normalization
• When learning from low dimensional feature vector observations, the
different components of the observation may have different physical units
(for example, positions versus velocities) and the ranges may vary across
environments. This can make it difficult for the network to learn effectively
and may make it difficult to find hyper-parameters which generalize across
environments with different scales of state values.
Batch normalization
• One approach to this problem is to manually scale the features so they are in
similar ranges across environments and units. We address this issue by
adapting a recent technique from deep learning called batch normalization.
This technique normalizes each dimension across the samples in a minibatch to have zero mean and unit variance.
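As an illustration, a PyTorch-style actor that batch-normalizes the raw observation and the hidden activations (the framework choice is my assumption; the 400/300 layer sizes follow the sizes reported in the paper's supplementary details):

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Normalizes each observation dimension and each hidden layer batch-wise,
    so differing units/scales (e.g. positions vs. velocities) need no hand-tuning."""
    def __init__(self, state_dim, action_dim, hidden1=400, hidden2=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm1d(state_dim),   # normalize the raw observation
            nn.Linear(state_dim, hidden1),
            nn.BatchNorm1d(hidden1),
            nn.ReLU(),
            nn.Linear(hidden1, hidden2),
            nn.BatchNorm1d(hidden2),
            nn.ReLU(),
            nn.Linear(hidden2, action_dim),
            nn.Tanh(),                   # bound actions to [-1, 1]
        )

    def forward(self, state):
        return self.net(state)
```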
Noise process
• A major challenge of learning in continuous action spaces is exploration. An
advantage of off-policy algorithms such as DDPG is that we can treat the
problem of exploration independently from the learning algorithm.
• We constructed an exploration policy 𝜇′ by adding noise sampled from a noise process 𝒩 to our actor policy: 𝜇′(𝑠𝑡) = 𝜇(𝑠𝑡|𝜃𝑡^𝜇) + 𝒩, where 𝒩 can be chosen to suit the environment.
• As detailed in the supplementary materials we used an Ornstein-Uhlenbeck
process (Uhlenbeck & Ornstein, 1930) to generate temporally correlated
exploration for exploration efficiency in physical control problems with inertia
(similar use of autocorrelated noise was introduced in (Wawrzynski, 2015)).
Noise process
• Ornstein–Uhlenbeck process
• It is a mean-reverting stochastic process: 𝜃 controls how quickly the process returns to the mean, 𝜇 is the mean, 𝜎 is the volatility of the process, and 𝑊𝑡 is a Wiener process.
• Therefore, each noise sample is temporally correlated with the previous one.
• The reason for using the temporally correlated noise process is that it is
more effective when learning in an inertial environment such as physical
control.
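The process described above is the Ornstein–Uhlenbeck SDE $dx_t = \theta(\mu - x_t)\,dt + \sigma\,dW_t$. Below is a minimal Euler-discretized sketch in Python (illustrative; the parameter values are my defaults, roughly in line with the θ = 0.15, σ = 0.2 reported in the paper):

```python
import numpy as np

class OUNoise:
    """Euler discretization of dx = theta * (mu - x) dt + sigma dW."""
    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu = mu * np.ones(action_dim)
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.reset()

    def reset(self):
        self.x = np.copy(self.mu)

    def sample(self):
        dw = np.sqrt(self.dt) * np.random.randn(*self.x.shape)
        self.x += self.theta * (self.mu - self.x) * self.dt + self.sigma * dw
        return self.x

# Exploration at each step: a_t = mu(s_t | theta_mu) + noise.sample()
```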
Algorithm
• Network structure
[Diagram: the state 𝑠 feeds Actor Network 1, which outputs the action 𝑎; Critic Network 1 takes (𝑠, 𝑎) and outputs 𝑄(𝑠, 𝑎); Critic Network 2 receives the next state 𝑠′ (the path used for computing targets).]
Algorithm
• Critic network
𝐿 = (1/𝑁) ∑𝑖 (𝑦𝑖 − 𝑄(𝑠𝑖, 𝑎𝑖 | 𝜃^𝑄))²
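A PyTorch-style sketch of one critic update under this loss (illustrative; `critic`, `target_critic`, `target_actor`, and the optimizer are assumed to exist, and the (1 − done) mask is a common practical addition not shown on the slide):

```python
import torch
import torch.nn.functional as F

def critic_update(critic, target_critic, target_actor, critic_optimizer,
                  states, actions, rewards, next_states, dones, gamma=0.99):
    # y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1})); targets use the target networks.
    with torch.no_grad():
        next_actions = target_actor(next_states)
        y = rewards + gamma * (1.0 - dones) * target_critic(next_states, next_actions)

    # L = 1/N * sum_i (y_i - Q(s_i, a_i | theta_Q))^2
    q = critic(states, actions)
    loss = F.mse_loss(q, y)

    critic_optimizer.zero_grad()
    loss.backward()
    critic_optimizer.step()
    return loss.item()
```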
Algorithm
• Actor network
𝜕𝑄(𝑠, 𝑎, 𝑤)/𝜕𝑎 and 𝜕𝑎/𝜕𝜃
[Diagram as above: the critic supplies the gradient of 𝑄 with respect to the action 𝑎, which is chained with the actor's Jacobian 𝜕𝑎/𝜕𝜃 to update the actor parameters; the sketch below performs the same computation via autograd.]
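A matching sketch of the actor update. Autograd composes ∂Q/∂a with ∂a/∂θ automatically, so ascending Q is written as minimizing −Q (again illustrative; `actor`, `critic`, and the optimizer are assumed to exist):

```python
def actor_update(actor, critic, actor_optimizer, states):
    # Differentiate Q(s, mu(s)) w.r.t. the actor parameters:
    # backprop chains dQ/da (from the critic) with da/dtheta (from the actor).
    actions = actor(states)
    actor_loss = -critic(states, actions).mean()

    actor_optimizer.zero_grad()
    actor_loss.backward()
    actor_optimizer.step()
    return actor_loss.item()
```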
Algorithm
[Figure: Algorithm 1 — the full DDPG pseudocode.]
Results
[Figure-only slides showing the experimental results.]
Conclusion
• The work combines insights from recent advances in deep learning
and reinforcement learning, resulting in an algorithm that robustly
solves challenging problems across a variety of domains with
continuous action spaces, even when using raw pixels for
observations.
• As with most reinforcement learning algorithms, the use of non-
linear function approximators nullifies any convergence guarantees;
however, our experimental results demonstrate stable learning without the need for any modifications between environments.
Conclusion
• Interestingly, all of our experiments used substantially fewer steps
of experience than was used by DQN learning to find solutions in
the Atari domain.
• Most notably, as with most model-free reinforcement approaches,
DDPG requires a large number of training episodes to find solutions.
However, we believe that a robust model-free approach may be an
important component of larger systems which may attack these
limitations.
References
• https://nervanasystems.github.io/coach/components/agents/poli
cy_optimization/ddpg.html
• https://youtu.be/h2WSVBAC1t4
• https://reinforcement-learning-kr.github.io/2018/06/27/2_dpg/
• https://reinforcement-learning-kr.github.io/2018/06/26/3_ddpg/
Thank you!