Mathematical Foundations of Reinforcement Learning (English Edition)

2025-10-16


Author: Shiyu Zhao

Pages: 312

Publisher: Tsinghua University Press

Publication date: 2024

ISBN: 9787302658528

High-resolution, proofread PDF (with table of contents)

See the bottom of this page for the PDF e-book.

About the Book

Starting from the most basic concepts of reinforcement learning, this book first introduces the fundamental analysis tools, namely the Bellman equation and the Bellman optimality equation (both shown below for reference), then moves on to model-based and model-free reinforcement learning algorithms, and finally to reinforcement learning methods based on function approximation. The emphasis throughout is on introducing concepts, analyzing problems, and analyzing algorithms from a mathematical perspective rather than on programming implementations. No prior background in reinforcement learning is assumed; readers only need basic knowledge of probability theory and linear algebra. Readers who already have a foundation in reinforcement learning will find that the book helps them understand certain topics more deeply and offers new perspectives.
The book is intended for undergraduates, graduate students, researchers, and practitioners in industry or research institutes who are interested in reinforcement learning.
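
For readers new to these two tools, here is one standard way to write them (the book develops its own elementwise and matrix-vector forms, so its notation may differ slightly). The Bellman equation characterizes the state value $v_\pi$ of a given policy $\pi$, and the Bellman optimality equation (BOE) characterizes the optimal state value $v_*$:

$$ v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s',\,r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_\pi(s') \,\bigr] $$

$$ v_*(s) = \max_{a} \sum_{s',\,r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_*(s') \,\bigr] $$

where $\gamma \in [0, 1)$ is the discount rate and $p(s', r \mid s, a)$ is the model of the environment. Chapters 2 and 3 of the book are devoted to these two equations, respectively.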

About the Author

Shiyu Zhao is a distinguished research fellow (AI track) at the School of Engineering, Westlake University, where he leads the Intelligent Unmanned Systems Laboratory, and is a recipient of the Young Talents project of the national program for recruiting high-level overseas talents. He received his bachelor's and master's degrees from Beihang University and his Ph.D. from the National University of Singapore, and was previously a Lecturer in the Department of Automatic Control and Systems Engineering at the University of Sheffield, UK. His research aims to develop interesting, useful, and challenging next-generation robotic systems, with a focus on control, decision-making, and perception in multi-robot systems.

Features

· Builds from zero to a thorough understanding: know not only what works but also why it works;
· Over 2,000 stars on GitHub;
· The accompanying course videos have been viewed more than 800,000 times across platforms;
· Highly praised by readers both in China and abroad;
· Textbook, videos, and lecture slides form a single integrated package.

Table of Contents

Overview of this Book
Chapter 1 Basic Concepts
  1.1 A grid world example
  1.2 State and action
  1.3 State transition
  1.4 Policy
  1.5 Reward
  1.6 Trajectories, returns, and episodes
  1.7 Markov decision processes
  1.8 Summary
  1.9 Q&A
Chapter 2 State Values and the Bellman Equation
  2.1 Motivating example 1: Why are returns important?
  2.2 Motivating example 2: How to calculate returns?
  2.3 State values
  2.4 The Bellman equation
  2.5 Examples for illustrating the Bellman equation
  2.6 Matrix-vector form of the Bellman equation
  2.7 Solving state values from the Bellman equation
    2.7.1 Closed-form solution
    2.7.2 Iterative solution
    2.7.3 Illustrative examples
  2.8 From state value to action value
    2.8.1 Illustrative examples
    2.8.2 The Bellman equation in terms of action values
  2.9 Summary
  2.10 Q&A
Chapter 3 Optimal State Values and the Bellman Optimality Equation
  3.1 Motivating example: How to improve policies?
  3.2 Optimal state values and optimal policies
  3.3 The Bellman optimality equation
    3.3.1 Maximization of the right-hand side of the BOE
    3.3.2 Matrix-vector form of the BOE
    3.3.3 Contraction mapping theorem
    3.3.4 Contraction property of the right-hand side of the BOE
  3.4 Solving an optimal policy from the BOE
  3.5 Factors that influence optimal policies
  3.6 Summary
  3.7 Q&A
Chapter 4 Value Iteration and Policy Iteration
  4.1 Value iteration
    4.1.1 Elementwise form and implementation
    4.1.2 Illustrative examples
  4.2 Policy iteration
    4.2.1 Algorithm analysis
    4.2.2 Elementwise form and implementation
    4.2.3 Illustrative examples
  4.3 Truncated policy iteration
    4.3.1 Comparing value iteration and policy iteration
    4.3.2 Truncated policy iteration algorithm
  4.4 Summary
  4.5 Q&A
Chapter 5 Monte Carlo Methods
  5.1 Motivating example: Mean estimation
  5.2 MC Basic: The simplest MC-based algorithm
    5.2.1 Converting policy iteration to be model-free
    5.2.2 The MC Basic algorithm
    5.2.3 Illustrative examples
  5.3 MC Exploring Starts
    5.3.1 Utilizing samples more efficiently
    5.3.2 Updating policies more efficiently
    5.3.3 Algorithm description
  5.4 MC ε-Greedy: Learning without exploring starts
    5.4.1 ε-greedy policies
    5.4.2 Algorithm description
    5.4.3 Illustrative examples
  5.5 Exploration and exploitation of ε-greedy policies
  5.6 Summary
  5.7 Q&A
Chapter 6 Stochastic Approximation
  6.1 Motivating example: Mean estimation
  6.2 Robbins-Monro algorithm
    6.2.1 Convergence properties
    6.2.2 Application to mean estimation
  6.3 Dvoretzky's convergence theorem
    6.3.1 Proof of Dvoretzky's theorem
    6.3.2 Application to mean estimation
    6.3.3 Application to the Robbins-Monro theorem
    6.3.4 An extension of Dvoretzky's theorem
  6.4 Stochastic gradient descent
    6.4.1 Application to mean estimation
    6.4.2 Convergence pattern of SGD
    6.4.3 A deterministic formulation of SGD
    6.4.4 BGD, SGD, and mini-batch GD
    6.4.5 Convergence of SGD
  6.5 Summary
  6.6 Q&A
Chapter 7 Temporal-Difference Methods
  7.1 TD learning of state values
    7.1.1 Algorithm description
    7.1.2 Property analysis
    7.1.3 Convergence analysis
  7.2 TD learning of action values: Sarsa
    7.2.1 Algorithm description
    7.2.2 Optimal policy learning via Sarsa
  7.3 TD learning of action values: n-step Sarsa
  7.4 TD learning of optimal action values: Q-learning
    7.4.1 Algorithm description
    7.4.2 Off-policy vs. on-policy
    7.4.3 Implementation
    7.4.4 Illustrative examples
  7.5 A unified viewpoint
  7.6 Summary
  7.7 Q&A
Chapter 8 Value Function Approximation
  8.1 Value representation: From table to function
  8.2 TD learning of state values with function approximation
    8.2.1 O…
The PDF is still being updated.
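
To give a flavor of the material listed above (the grid-world example of Chapter 1 and the value iteration algorithm of Chapter 4), here is a minimal, hypothetical Python sketch. It is not code from the book; the grid size, rewards, and discount rate are made up for illustration.

```python
# A minimal, hypothetical sketch of value iteration on a 3x3 deterministic
# grid world (not code from the book). The agent can move up/down/left/right
# or stay; it receives -1 for bumping into the boundary, +1 for entering or
# staying in the target cell, and 0 otherwise.

GAMMA = 0.9                    # discount rate
SIZE = 3                       # grid is SIZE x SIZE
TARGET = (2, 2)                # target cell
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up, down, left, right, stay

def step(state, action):
    """Deterministic model: return (next state, immediate reward)."""
    r, c = state
    nr, nc = r + action[0], c + action[1]
    if not (0 <= nr < SIZE and 0 <= nc < SIZE):
        return state, -1.0     # bounced off the boundary
    if (nr, nc) == TARGET:
        return (nr, nc), 1.0
    return (nr, nc), 0.0

# Value iteration: repeatedly apply the Bellman optimality operator
#   v(s) <- max_a [ r(s, a) + GAMMA * v(s') ]
# until the state values stop changing.
values = {(r, c): 0.0 for r in range(SIZE) for c in range(SIZE)}
while True:
    new_values = {
        s: max(reward + GAMMA * values[s2]
               for s2, reward in (step(s, a) for a in ACTIONS))
        for s in values
    }
    converged = max(abs(new_values[s] - values[s]) for s in values) < 1e-6
    values = new_values
    if converged:
        break

# Extract a greedy policy from the converged values.
policy = {
    s: max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * values[step(s, a)[0]])
    for s in values
}

print(values[(0, 0)], policy[(0, 0)])
```

Running it prints the converged value of the top-left cell and the greedy action chosen there. The book is concerned with why this iteration converges (via the contraction mapping theorem of Chapter 3) rather than with such implementations.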
- THE END -

Unless otherwise stated, all articles on this blog are the blogger's original work.