RL_Learning

Tags (space separated): RL_learning

Original source: OpenAI Spinning Up

This note covers:

- states and observations
- action spaces
- policies
- trajectories
- different formulations of return
- the RL optimization problem
- value functions

States and Observations

A state \(s\) is a complete description of the state of the world; it reflects everything about the world's true situation.
An observation \(o\) is a partial description of a state, which may omit some information.
In deep RL, states and observations are usually represented by a real-valued vector, matrix, or higher-order tensor. For an image, the observation is a matrix of pixel values; for a robot, the observation might be its joint angles and velocities.
When the agent can observe the complete state of the environment, we say the environment is fully observed. When the agent can only see partial information, we say it is partially observed.
Note: in principle the choice of action depends on the state, but in practice the action is conditioned on the observation, because the full state is not available to the agent.

Action Spaces

Different environments allow different kinds of actions. The set of all valid actions in a given environment is called the action space. Some environments, such as Atari and Go, have discrete action spaces, where only a finite number of moves are available to the agent. Other environments, such as controlling a robot in the real world, have continuous action spaces; there, actions are real-valued vectors.

The type of action space has a significant effect on RL algorithms: some algorithms can be applied directly only in one kind of action space, and would have to be substantially reworked for the other.
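
To make the distinction concrete, here is a minimal sketch using the Gym `spaces` API (the use of Gym here is an illustrative assumption, not something the notes above rely on): a `Discrete` space enumerates a finite set of actions, while a `Box` space holds real-valued action vectors.

import numpy as np
from gym import spaces

# Discrete action space (Atari/Go style): actions are integers in {0, ..., n-1}.
discrete_space = spaces.Discrete(6)
a_discrete = discrete_space.sample()      # e.g. 3

# Continuous action space (robot-control style): actions are real-valued vectors.
continuous_space = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
a_continuous = continuous_space.sample()  # e.g. array([ 0.12, -0.87,  0.40])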

Policies

A policy is the rule an agent uses to decide which actions to take. It can be deterministic, in which case it is usually denoted by \(\mu\):
\[a_t = \mu(s_t),\]
or it may be stochastic, in which case it is usually denoted by \(\pi\):
\[a_t \sim \pi(\cdot \mid s_t).\]
In deep RL we are really dealing with parameterized policies:
policies whose outputs are computable functions that control the agent's behavior and that depend on a set of parameters, such as the weights and biases of a neural network. We optimize those parameters in order to change the behavior.
Such a parameterized policy is usually written with a subscript \(\theta\) or \(\phi\), as in
\[a_t = \mu_{\theta}(s_t),\]
\[a_t \sim \pi_{\theta}(\cdot \mid s_t).\]

Deterministic Policies

Below is an example of a deterministic policy:

obs = tf.placeholder(shape=(None, obs_dim), dtype=tf.float32)
net = mlp(obs, hidden_dims=(64,64), activation=tf.tanh)
actions = tf.layers.dense(net, units=act_dim, activation=None)

where mlp is a function that stacks multiple dense layers on top of each other with the given sizes and activation.
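
The `mlp` helper itself is not shown in the snippet. A minimal sketch consistent with that description, in the same TensorFlow 1.x style (the actual Spinning Up implementation may differ in details), could be:

import tensorflow as tf

def mlp(x, hidden_dims=(64, 64), activation=tf.tanh):
    # Stack dense layers of the given sizes, each followed by the activation.
    for size in hidden_dims:
        x = tf.layers.dense(x, units=size, activation=activation)
    return x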

Stochastic Policies

The two most common kinds of stochastic policies in deep RL are categorical policies and diagonal Gaussian policies.

Categorical policies are used for discrete action spaces, while diagonal Gaussian policies are used for continuous action spaces.

Two computations are centrally important for training and using stochastic policies:

sampling actions from the policy, and computing the log likelihood of particular actions, \(\log \pi_{\theta}(a|s)\).

That is: how to select an action according to the policy, and how to compute the (log) probability of a specified action under it.

Categorical Policies

A categorical policy is like a classifier over discrete actions, and the neural network for it is built the same way as for a classifier: the input is the observation, followed by some number of layers (generally convolutional or densely connected, depending on the kind of input), and a final linear layer that gives a score (logit) for each action, which a softmax converts into the probability of selecting each action.
Sampling. Given the probability of each action, sampling an action is a draw from the corresponding categorical distribution.
Log-Likelihood. The log-likelihood is the log of the probability assigned to the chosen action. Denoting the vector of action probabilities by \(P_{\theta}(s)\),
\[\log \pi_{\theta}(a|s) = \log \left[ P_{\theta}(s) \right]_a.\]
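
As an illustrative sketch (not the exact Spinning Up code), both operations for a categorical policy can be written in the same TensorFlow 1.x style as the earlier snippet. Here `obs_dim` and `n_actions` are assumed integers and `mlp` is the helper sketched above; the network outputs logits, sampling draws an action index from them, and the log-likelihood reads off the log-softmax entry of the chosen action.

import tensorflow as tf

obs = tf.placeholder(shape=(None, obs_dim), dtype=tf.float32)
act = tf.placeholder(shape=(None,), dtype=tf.int32)

net = mlp(obs, hidden_dims=(64, 64), activation=tf.tanh)
logits = tf.layers.dense(net, units=n_actions, activation=None)

# Sampling: draw one action per observation from the categorical distribution.
sampled_act = tf.squeeze(tf.multinomial(logits=logits, num_samples=1), axis=1)

# Log-likelihood of given actions: log [P_theta(s)]_a for each row.
log_probs_all = tf.nn.log_softmax(logits)
log_prob_act = tf.reduce_sum(tf.one_hot(act, depth=n_actions) * log_probs_all, axis=1)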

Diagonal Gaussian Policies

A multivariate Gaussian distribution is described by a mean vector \(\mu\) and a covariance matrix \(\Sigma\). A diagonal Gaussian distribution is the special case in which the covariance matrix has nonzero entries only on the diagonal, so it can be represented by a single vector.

A diagonal Gaussian policy uses a neural network to map observations to mean actions, \(\mu_{\theta}(s)\). The standard deviations are typically represented by their logarithms.
Working with log standard deviations is convenient because log stds are free to take on any value in \((-\infty, \infty)\), while the standard deviations themselves must be nonnegative. This makes training much easier, since we do not have to enforce any constraint on the range of the parameter.

Sampling. Given the mean action \(\mu_{\theta}(s)\), the standard deviation \(\sigma_{\theta}(s)\), and a vector of spherical Gaussian noise \(z \sim \mathcal{N}(0, I)\), an action sample can be computed as
\[a = \mu_{\theta}(s) + \sigma_{\theta}(s) \odot z,\]
where \(\odot\) denotes the elementwise product of two vectors.
Log-Likelihood. The log-likelihood of a \(k\)-dimensional action \(a\), for a diagonal Gaussian with mean \(\mu = \mu_{\theta}(s)\) and standard deviation \(\sigma = \sigma_{\theta}(s)\), is given by
\[\log \pi_{\theta}(a|s) = -\frac{1}{2}\left(\sum_{i=1}^k \left(\frac{(a_i - \mu_i)^2}{\sigma_i^2} + 2 \log \sigma_i \right) + k \log 2\pi \right).\]
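
An analogous sketch for a diagonal Gaussian policy, again in the TF 1.x style of the earlier snippets (using a state-independent log-std vector, which is one common choice; `obs_dim`, `act_dim`, and `mlp` are the same assumptions as before):

import numpy as np
import tensorflow as tf

obs = tf.placeholder(shape=(None, obs_dim), dtype=tf.float32)
act = tf.placeholder(shape=(None, act_dim), dtype=tf.float32)

net = mlp(obs, hidden_dims=(64, 64), activation=tf.tanh)
mu = tf.layers.dense(net, units=act_dim, activation=None)   # mean action mu_theta(s)
log_std = tf.get_variable('log_std', initializer=-0.5 * np.ones(act_dim, dtype=np.float32))
std = tf.exp(log_std)

# Sampling: a = mu + sigma * z, with spherical Gaussian noise z ~ N(0, I).
z = tf.random_normal(tf.shape(mu))
sampled_act = mu + std * z

# Log-likelihood of a given action under the diagonal Gaussian (matches the formula above).
log_prob_act = -0.5 * tf.reduce_sum(
    ((act - mu) / std) ** 2 + 2 * log_std + np.log(2 * np.pi), axis=1)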

Trajectories

A trajectory \(\tau\) is a sequence of states and actions in the world,
\[\tau = (s_0, a_0, s_1, a_1, ...).\]
The very first state of the world, \(s_0\), is randomly sampled from the start-state distribution, sometimes denoted by \(\rho_0\):

\[s_0 \sim \rho_0(\cdot).\]

State transitions (what happens to the world between the state at time \(t\), \(s_t\), and the state at \(t+1\), \(s_{t+1}\)) are governed by the natural laws of the environment, and depend on only the most recent action, \(a_t\). They can be either deterministic,

\[s_{t+1} = f(s_t, a_t),\]

or stochastic,
\[s_{t+1} \sim P(\cdot|s_t, a_t).\]
Actions come from an agent according to its policy.
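
To connect these definitions to code, here is a minimal rollout loop under the classic Gym reset/step API (the `env` and `policy` objects are placeholders for illustration): the first state comes from the start-state distribution, and each later state comes from the environment's transition rules.

def rollout(env, policy, max_steps=1000):
    # Collect one trajectory tau = (s_0, a_0, r_0, s_1, a_1, r_1, ...).
    trajectory = []
    s = env.reset()                          # s_0 ~ rho_0(.)
    for t in range(max_steps):
        a = policy(s)                        # a_t chosen by the agent's policy
        s_next, r, done, info = env.step(a)  # s_{t+1} ~ P(.|s_t, a_t), reward r_t
        trajectory.append((s, a, r))
        s = s_next
        if done:
            break
    return trajectory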

Reward and Return

The reward function R is critically important in reinforcement learning. It depends on the current state of the world, the action just taken, and the next state of the world:
\[r_t = R(s_t, a_t, s_{t+1}),\]

although frequently this is simplified to just a dependence on the current state, \(r_t = R(s_t)\), or state-action pair, \(r_t = R(s_t, a_t)\).

In other words, the reward is a function of the current state, the action just taken, and the next state; more commonly it is simplified to a function of only the current state, or of the current state-action pair.

The goal of the agent is to maximize some notion of cumulative reward over a trajectory, but this actually can mean a few things. We’ll notate all of these cases with \(R(\tau)\), and it will either be clear from context which case we mean, or it won’t matter (because the same equations will apply to all cases).

The agent's goal is to maximize the cumulative reward, so we use \(R(\tau)\) to denote the return the agent obtains.

One kind of return is the finite-horizon undiscounted return, which is just the sum of rewards obtained in a fixed window of steps:
\[R(\tau) = \sum_{t=0}^{T} r_t.\]

Another kind of return is the infinite-horizon discounted return, which is the sum of all rewards ever obtained by the agent, but discounted by how far off in the future they’re obtained. This formulation of reward includes a discount factor \(\gamma \in (0,1)\):
\[R(\tau) = \sum_{t=0}^{\infty} \gamma^t r_t.\]

Why would we ever want a discount factor, though? Don’t we just want to get all rewards? We do, but the discount factor is both intuitively appealing and mathematically convenient. On an intuitive level: cash now is better than cash later. Mathematically: an infinite-horizon sum of rewards may not converge to a finite value, and is hard to deal with in equations. But with a discount factor and under reasonable conditions, the infinite sum converges.

So there are two ways of handling the rewards along a trajectory: sum the per-step rewards without discounting over a finite number of steps, or sum over an infinite number of steps while discounting each reward, with a discount factor between 0 and 1.

Why do we want a discount factor at all? Don’t we want all of the reward? We do, but the discount factor is mathematically convenient: the undiscounted sum of an infinite sequence of rewards may not converge to a finite value, whereas with a discount factor the infinite sum converges (under reasonable conditions).
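
Both notions of return are easy to compute from a list of per-step rewards; a plain-Python sketch:

def finite_horizon_return(rewards):
    # R(tau) = sum_{t=0}^{T} r_t
    return sum(rewards)

def discounted_return(rewards, gamma=0.99):
    # R(tau) = sum_t gamma^t r_t (truncated where the reward list ends)
    return sum((gamma ** t) * r for t, r in enumerate(rewards))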

The RL Problem

Whatever the choice of return measure (whether infinite-horizon discounted, or finite-horizon undiscounted), and whatever the choice of policy, the goal in RL is to select a policy which maximizes expected return when the agent acts according to it.

No matter which form of return we use and which policy we choose, the ultimate goal of RL is to select a policy that maximizes the expected return when the agent acts according to it.

To talk about expected return, we first have to talk about probability distributions over trajectories.

Let’s suppose that both the environment transitions and the policy are stochastic. In this case, the probability of a \(T\)-step trajectory is:

\[P(\tau|\pi) = \rho_0(s_0) \prod_{t=0}^{T-1} P(s_{t+1}|s_t, a_t)\, \pi(a_t|s_t).\]

The expected return (for whichever measure), denoted by \(J(\pi)\), is then:
\[J(\pi) = \int_{\tau} P(\tau|\pi) R(\tau) = \underset{\tau \sim \pi}{E}[R(\tau)].\]

The central optimization problem in RL can then be expressed by
\[\pi^* = \arg\max_{\pi} J(\pi),\]

with \(\pi^*\) being the optimal policy.
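
Since the integral over trajectories is intractable in general, \(J(\pi)\) is usually estimated by averaging the return over sampled trajectories. A sketch that reuses the `rollout` and `discounted_return` helpers from the earlier sketches (both of which are illustrative assumptions):

def estimate_expected_return(env, policy, n_episodes=100, gamma=0.99):
    # Monte Carlo estimate of J(pi) = E_{tau ~ pi}[R(tau)].
    returns = []
    for _ in range(n_episodes):
        trajectory = rollout(env, policy)
        rewards = [r for (_, _, r) in trajectory]
        returns.append(discounted_return(rewards, gamma))
    return sum(returns) / len(returns)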

Value Functions

It’s often useful to know the value of a state, or state-action pair. By value, we mean the expected return if you start in that state or state-action pair, and then act according to a particular policy forever after. Value functions are used, one way or another, in almost every RL algorithm.

Knowing the value of a state or state-action pair is very useful: that value is the expected return obtained by starting in that state (or state-action pair) and then acting according to a particular policy. One or more value functions are used in almost every RL algorithm.

There are four main value functions:

The On-Policy Value Function, \(V^{\pi}(s)\), which gives the expected return if you start in state \(s\) and always act according to policy \(\pi\):

\[V^{\pi}(s) = \underset{\tau \sim \pi}{E}\left[R(\tau) \mid s_0 = s\right]\]

The On-Policy Action-Value Function, \(Q^{\pi}(s,a)\), which gives the expected return if you start in state \(s\), take an arbitrary action \(a\) (which may not have come from the policy), and then forever after act according to policy \(\pi\):

\[Q^{\pi}(s,a) = \underset{\tau \sim \pi}{E}\left[R(\tau) \mid s_0 = s, a_0 = a\right]\]

The Optimal Value Function, \(V^*(s)\), which gives the expected return if you start in state \(s\) and always act according to the optimal policy in the environment:

\[V^*(s) = \max_{\pi} \underset{\tau \sim \pi}{E}\left[R(\tau) \mid s_0 = s\right]\]

The Optimal Action-Value Function, \(Q^*(s,a)\), which gives the expected return if you start in state \(s\), take an arbitrary action \(a\), and then forever after act according to the optimal policy in the environment:

\[Q^*(s,a) = \max_{\pi} \underset{\tau \sim \pi}{E}\left[R(\tau) \mid s_0 = s, a_0 = a\right]\]

The Optimal Q-function and the Optimal Action

There is an important connection between the optimal action-value function \(Q^*(s,a)\) and the action selected by the optimal policy. By definition, \(Q^*(s,a)\) gives the expected return for starting in state \(s\), taking (arbitrary) action \(a\), and then acting according to the optimal policy forever after.

The optimal policy in \(s\) will select whichever action maximizes the expected return from starting in \(s\). As a result, if we have \(Q^*\), we can directly obtain the optimal action, \(a^*(s)\), via

\[a^*(s) = \arg\max_{a} Q^*(s,a).\]

Note: there may be multiple actions which maximize \(Q^*(s,a)\), in which case, all of them are optimal, and the optimal policy may randomly select any of them. But there is always an optimal policy which deterministically selects an action.
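
For a discrete action space this argmax is a one-liner; a hedged numpy sketch, where the Q-values for a single state are assumed to come from some learned approximation of \(Q^*\):

import numpy as np

def greedy_action(q_values_for_state):
    # a*(s) = argmax_a Q*(s, a). If several actions tie, any of them is optimal;
    # np.argmax simply returns the first.
    return int(np.argmax(q_values_for_state))

# Example: Q*(s, .) = [1.0, 3.5, 3.5, 2.0] -> action 1 (action 2 would be equally optimal).
print(greedy_action(np.array([1.0, 3.5, 3.5, 2.0])))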

Bellman Equations

All four of the value functions obey special self-consistency equations called Bellman equations. The basic idea behind the Bellman equations is this:

The value of your starting point is the reward you expect to get from being there, plus the value of wherever you land next.

The Bellman equations for the on-policy value functions are
\[V^{\pi}(s) = \underset{a \sim \pi,\; s' \sim P}{E}\left[r(s,a) + \gamma V^{\pi}(s')\right],\]
\[Q^{\pi}(s,a) = \underset{s' \sim P}{E}\left[r(s,a) + \gamma \underset{a' \sim \pi}{E}\left[Q^{\pi}(s',a')\right]\right],\]

where \(s' \sim P\) is shorthand for \(s' \sim P(\cdot|s,a)\), indicating that the next state \(s'\) is sampled from the environment’s transition rules; \(a \sim \pi\) is shorthand for \(a \sim \pi(\cdot|s)\); and \(a' \sim \pi\) is shorthand for \(a' \sim \pi(\cdot|s')\).

The Bellman equations for the optimal value functions are

\[V^*(s) = \max_{a} \underset{s' \sim P}{E}\left[r(s,a) + \gamma V^*(s')\right],\]
\[Q^*(s,a) = \underset{s' \sim P}{E}\left[r(s,a) + \gamma \max_{a'} Q^*(s',a')\right].\]

The crucial difference between the Bellman equations for the on-policy value functions and the optimal value functions, is the absence or presence of the max over actions. Its inclusion reflects the fact that whenever the agent gets to choose its action, in order to act optimally, it has to pick whichever action leads to the highest value.

Note

The term “Bellman backup” comes up quite frequently in the RL literature. The Bellman backup for a state, or state-action pair, is the right-hand side of the Bellman equation: the reward-plus-next-value.
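
To make the reward-plus-next-value structure concrete, here is a sketch of one Bellman backup sweep for \(V^*\) on a small tabular MDP. The data layout is an illustrative assumption: `P[s][a]` is a list of `(prob, s_next, reward)` outcomes and `V` is a dict of current value estimates.

def bellman_backup_v_star(V, P, gamma=0.99):
    # One sweep of V*(s) <- max_a E_{s' ~ P}[ r(s,a) + gamma * V*(s') ].
    V_new = {}
    for s, actions in P.items():
        V_new[s] = max(
            sum(prob * (reward + gamma * V[s_next])
                for prob, s_next, reward in outcomes)
            for outcomes in actions.values()
        )
    return V_new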

Advantage Functions

Sometimes in RL, we don’t need to describe how good an action is in an absolute sense, but only how much better it is than others on average. That is to say, we want to know the relative advantage of that action. We make this concept precise with the advantage function.

The advantage function \(A^{\pi}(s,a)\) corresponding to a policy \(\pi\) describes how much better it is to take a specific action \(a\) in state \(s\), over randomly selecting an action according to \(\pi(\cdot|s)\), assuming you act according to \(\pi\) forever after. Mathematically, the advantage function is defined by

\[A^{\pi}(s,a) = Q^{\pi}(s,a) - V^{\pi}(s).\]

Note
The advantage function is crucially important to policy gradient methods.
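
Given estimates of \(Q^{\pi}\) and \(V^{\pi}\) for a state, the advantage is just their difference; a minimal numpy sketch:

import numpy as np

def advantage(q_values_for_state, v_value):
    # A^pi(s, a) = Q^pi(s, a) - V^pi(s), computed for every action a in state s.
    return np.asarray(q_values_for_state) - v_value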
