2. Markov Decision Process

October 7, 2018 • Busa Victor

In the first two articles of this series we learned how to use the gym package from OpenAI. We have already trained an agent on the very simple FrozenLake-v0 environment and we have seen that, after training, the agent was able to reach the Goal state (even in the stochastic environment!). In this new article we will tackle the same problem, but this time we will use 2 other algorithms: the first is called Policy iteration and the second is named Value iteration. As usual the code for this article is available here.

Markov Decision Process Definition

A Markov Decision Process is defined by:

  1. a set of states: $S = \{s_0, s_1, \dots, s_n\}$
  2. a set of available actions: $A = \{a_0, a_1, \dots, a_m\}$
  3. a state transition function: $P(s' |\; s, a)$
  4. a reward function: $R(s, a)$ (sometimes also written $R(s, a, s')$)

This is quite abstract, and when that is the case, the best way to explain it is with a figure. Let me just take my most beautiful pencil, scribble a little something, and come back. Figure 1 shows $P(s' |\; s, a)$ and $R(s,a)$ when our agent is in state $10$ ($s=10$) of the stochastic FrozenLake-v0 environment.

Markov Decision Process
Figure 1: $P(s' |\; s, a)$ and $R(s, a)$ for $s=10$ in the stochastic FrozenLake-v0 environment

According to the previous figure, we can see that, if the agent is in state $10$ and takes action:

  • Left: it receives an average reward of: $1/3 \times 0 + 1/3 \times 0 + 1/3 \times 0 = 0$
  • Down: it receives an average reward of: $1/3 \times 0 + 1/3 \times 0 + 1/3 \times 0 = 0$
  • Right: it receives an average reward of: $1/3 \times -1 + 1/3 \times 0 + 1/3 \times 0 = -1/3$
  • Up: it receives an average reward of: $1/3 \times 0 + 1/3 \times 0 + 1/3 \times 0 = 0$

So $R(s, a)$ for the state $s=10$ will be a vector of size $4$ (4 different actions) whose entries are the average reward obtained for each action (here the action refers to the action column in the table of figure 1 and NOT to the actual move performed).

In python $R(s=10, a)$ would then be:

import gym
import numpy as np

# FrozenLake-v0 environment, as used in the previous articles
env = gym.make("FrozenLake-v0")

# mapping between action and index of the action
# LEFT = 0   DOWN = 1   RIGHT = 2  UP = 3
n_states, n_actions = env.observation_space.n, env.action_space.n
R = np.zeros((n_states, n_actions))
R[10, :] = [0, 0, -0.33, 0]

Mathematically, $R(s, a)$ is defined by:

\begin{equation} R(s, a) = \sum\limits_{(r, s') \in (R, S)} r P(s' | s, a) \end{equation}

where $R$ here denotes the immediate reward, i.e. the reward column in the previous table. It is by no means the average reward: $R \neq R(s,a)$. Let's use the previous formula to compute $R(10, 2)$ (the average reward received by our agent if he is in state $10$ and chooses to go RIGHT):

\[\begin{align*} R(10, 2) &= \sum\limits_{(r, s') \in \{(-1, 11), (0, 14), (0, 6)\}} r P(s' |\; 10, 2) \\\\ &= -1 \times P(11 |\; 10, 2) + 0 \times P(14 |\; 10, 2) + 0 \times P(6 |\; 10, 2) \\\\ &= -1/3 \end{align*}\]
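
As a quick sanity check, we can reproduce this computation in a couple of lines of Python (the probabilities and rewards below are simply read off the table of figure 1):

# transitions for s=10, a=RIGHT, read off the table of figure 1:
# (probability of the move, immediate reward) for landing in states 11, 14 and 6
transitions = [(1./3, -1), (1./3, 0), (1./3, 0)]

R_10_right = sum(p * r for p, r in transitions)
print(R_10_right)   # -0.333... = -1/3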

What does the state transition function $P(s' |\; s, a)$ represent? Let's again take an example. If I assume my agent is in state $s=10$ and that it chooses action $a=2$ (the RIGHT action), then according to the table in figure 1 we can see that our agent ends up:

  • in state $11$ with probability $1/3$
  • in state $14$ with probability $1/3$
  • in state $6$ with probability $1/3$

Hence, in python we can write:

n_states, n_actions = env.observation_space.n, env.action_space.n
P = np.zeros((n_states, n_actions, n_states))
P[10, 2, 11] = 0.33
P[10, 2, 14] = 0.33
P[10, 2, 6] = 0.33

The take-away message is that:

  • $R(s,a)$ represents the average reward that our agent obtains if he is in state $s$ and selects action $a$
  • $P(s' |\; s, a)$ is the probability that our agent ends up in state $s'$ knowing that he is in state $s$ and that he chose action $a$

Compute R(s,a) and P(s’ | s, a) in python

In python, if we want to recover $R(s,a)$ for all $(s,a) \in S \times A$, we can write:

def getReward(env):
    n_states, n_actions = env.observation_space.n, env.action_space.n    
    R = np.zeros((n_states, n_actions))

    for s in range(n_states):
        for a, moves in env.env.P[s].items():
            for possible_move in moves:
                _, _, r, _ = possible_move
                R[s, a] += r
    
    # divide the sum of rewards by 3 because each of the 3 possible
    # moves is taken with probability 1/3
    R /= 3
    return R
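
Note that dividing by 3 hard-codes the fact that, in this environment, every move listed in env.env.P happens with probability $1/3$. A slightly more general sketch (not the version used in the rest of this article) would simply weight each outcome by the probability that env.env.P already provides:

def getRewardGeneral(env):
    n_states, n_actions = env.observation_space.n, env.action_space.n
    R = np.zeros((n_states, n_actions))

    for s in range(n_states):
        for a, moves in env.env.P[s].items():
            for prob, next_s, r, done in moves:
                # weight the immediate reward by the probability of the move
                R[s, a] += prob * r
    return R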

In the same fashion we can implement a function to construct the $P(s'|\; s, a)$ tensor:

def getProb(env):
    n_states, n_actions = env.observation_space.n, env.action_space.n    
    P = np.zeros((n_states, n_actions, n_states))

    for s in range(n_states):
        for a in range(n_actions):
            for moves in env.env.P[s][a]:
                _, next_s, _, _ = moves
                P[s, a, next_s] += 1
    
    # idem, divide by 3 because each of the 3 possible
    # moves is taken with probability 1/3
    P /= 3
    return P
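
As a quick sanity check (assuming env is the FrozenLake-v0 environment used in figure 1), the slice of $P$ for $s=10$, $a=2$ should match the transition table of figure 1, and $R(10, \cdot)$ should match the vector we filled in by hand earlier:

R = getReward(env)
P = getProb(env)

print(np.nonzero(P[10, 2])[0])    # [ 6 11 14]: the three reachable states
print(P[10, 2, [6, 11, 14]])      # [0.333... 0.333... 0.333...]
print(R[10])                      # ~[0, 0, -0.33, 0] with the rewards of figure 1

Note that for terminal states (holes and the goal) env.env.P only lists a single self-transition, so with the divide-by-3 shortcut those rows of P sum to 1/3 rather than 1; this does not affect the algorithms below because the reward on those transitions is 0.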

Policy iteration

A policy is a mapping from states to probabilities of selecting each possible action. Usually a policy is denoted $\pi$ and the optimal policy is denoted $\pi^{\star}$. A policy is nothing else but a probability distribution, i.e. $\pi(a|\; s)$ is the probability that our agent chooses action $a$ in state $s$. $\pi$ can also be deterministic. In this case it is denoted $\pi(s)$ and it represents the action chosen in state $s$.

The goal of the policy iteration algorithm is then to iterate and improve the policy at each iteration. It can be broken down into 3 phases:

  1. initialize the policy with arbitrary values
  2. iterate until our policy stops improving. At each iteration:
    1. evaluate our new policy
    2. if the new policy is better than the previous one, replace it

The question is: how can I find a better policy at each iteration? This article and this series focus more on the implementation part than on the mathematical part. Also, I don't want to just copy and paste the proof that you can find in the great book Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto (Section 4.2). So, here we will just implement the Policy iteration algorithm (Section 4.3) in Python.

To replace the current policy with the newly created policy we first and foremost need to create a function that evaluates the current policy. We will call this function policy_evaluation. This function returns the state value function, i.e. a value for each state. Figure 2 shows an example of a state value function.

Value function
Figure 2: Example of value function for the FrozenLake-v0 environment

Let’s assume our agent is in state $s=10$. Our agent can choose to go in any of the four cardinal directions. According to the previous figure we can see that our agent will prefer to go DOWN because by going DOWN it will receive the greatest value ($0.5442$). The question is then: given a policy how can I recover the state value function?

To answer this question we should first define exactly what the state value function is. Assuming that we are using a discount factor $\gamma$ and that we are working with an infinite horizon, the state value function is:

\begin{equation} V^{\pi}(s) = \mathbb{E}[\sum\limits_{t \geq 0} \gamma^t R(s_t, \pi(s_t)) |\; s_0 = s; \pi] \end{equation}

You know I didn't major in math? Ok, let me explain to you what it means. It simply means that the state value function in the state $s$ is the sum of all the rewards (discounted by $\gamma$) we will receive until we reach a terminal state (in our example either a hole or the goal state) if our agent abides by the policy $\pi$. As we can have various trajectories starting from state $s$ and ending in a terminal state, we need to take the expectation. Let's work on an example. If we assume our agent is in state $13$ and he follows the policy $\pi$ such that $\pi(s=13) = 2$ (tell our agent to go RIGHT in state $13$) then our agent's trajectories will start with either:

  • $13 \rightarrow 14 \rightarrow$ …
  • $13 \rightarrow 9 \rightarrow$ …
  • $13 \rightarrow 12 \rightarrow$ …

because in our example the environment is stochastic in such a way that if we tell our agent to go RIGHT he can either go RIGHT, DOWN or UP. If we sample all the possible trajectories starting from a state $s$ and if we take the average of the sum of all the discounted rewards for all the trajectories then we obtain $V^{\pi}(s)$.
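
To make this idea concrete, here is a minimal Monte-Carlo sketch (not something the article relies on later): it repeatedly starts a trajectory in a given state, follows a deterministic policy pi, and averages the discounted returns. It assumes the starting state can be forced through env.env.s, which works for gym's toy_text environments, and uses $\gamma = 0.8$ as in the rest of the article.

def mc_state_value(env, pi, s, gamma=0.8, n_rollouts=5000, max_steps=100):
    """Monte-Carlo estimate of V^pi(s) for a deterministic policy pi."""
    total_return = 0.0
    for _ in range(n_rollouts):
        env.reset()
        env.env.s = s                  # force the starting state (works for toy_text envs)
        state, discount, ret = s, 1.0, 0.0
        for _ in range(max_steps):
            state, reward, done, _ = env.step(pi[state])
            ret += discount * reward
            discount *= gamma
            if done:
                break
        total_return += ret
    return total_return / n_rollouts

With enough rollouts, mc_state_value(env, pi, 13) approaches the value $V^{\pi}(13)$ that the policy_evaluation function below computes exactly.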

So let's get back on track… The question was: how can I recover the state value function? Now that we know what a state value function is, we can prove that, for a stationary policy $\pi$, the state value function at a state $s \in S$ satisfies the Bellman equation:

\begin{equation} V^{\pi}(s) = R(s, \pi(s)) + \gamma \sum\limits_{s' \in S} P(s'|\; s, \pi(s)) V^{\pi}(s') \end{equation}

This relationship follows from the previous definition, because we can write:

\[\begin{align*} V^{\pi}(s) &\stackrel{1}{=} \mathbb{E}[\sum\limits_{t \geq 0} \gamma^t R(s_t, \pi(s_t)) |\; s_0 = s; \pi] \\\\ &\stackrel{2}{=} R(s, \pi(s)) + \mathbb{E}[\sum\limits_{t \geq 1} \gamma^t R(s_t, \pi(s_t)) |\; s_0 = s; \pi] \\\\ &\stackrel{3}{=} R(s, \pi(s)) + \gamma \sum\limits_{s' \in S} \mathbb{P}(s_1 = s' |\; s_0 = s; \pi(s_0)) \mathbb{E}[\sum\limits_{t \geq 1} \gamma^{t-1} R(s_t, \pi(s_t)) |\; s_1 = s'; \pi] \\\\ &\stackrel{4}{=} R(s, \pi(s)) + \gamma \sum\limits_{s' \in S} P(s' |\; s, \pi(s)) V^{\pi}(s') \end{align*}\]

Equality $1$ comes from the definition. Equality $2$ comes from the fact that $R(s, a)$ is the average reward we get if we choose action $a$ in state $s$, so we can move it out of the expectation. Equality $3$ comes from the fact that we can end up in any next state $s'$ with probability $P(s' |\; s, a)$. Equality $4$ uses the change of variable $u = t-1$ and the definition again.

I think we forgot the main objective… So let me refresh your memory. We wanted to evaluate a policy $\pi$, that is to say we want to compute $V^{\pi}(s)$, $\forall s \in S$. To do that we can use equation (3). We just need to put $V^{\pi}(s)$ on one side of the equation. To do that we will first rewrite (3) using matrix notation. We can notice that $V^{\pi}$ and $R^{\pi}$ are vectors whose dimension is the number of states (in the FrozenLake-v0 environment it is 16). Hence we have:

  • $V^{\pi} = \begin{pmatrix} V^{\pi}(0) & V^{\pi}(1) & \dots & V^{\pi}(N) \end{pmatrix}$
  • $R^{\pi} = \begin{pmatrix} R(0, \pi(0)) & R(1, \pi(1)) & \dots & R(N, \pi(N)) \end{pmatrix}$

where the states are indexed from $0$ to $N$ and $N + 1 = card(S)$ is the number of states in the set $S$.

Moreover $P^{\pi}$ can be seen as a matrix of dimension number of states $\times$ number of states:

\[P^{\pi} = \begin{pmatrix} P(0 |\; 0, \pi(0)) & P(1 |\; 0, \pi(0)) & \dots & P(N |\; 0, \pi(0)) \\ \vdots & \vdots & \ddots & \vdots \\ P(0 |\; N, \pi(N)) & P(1 |\; N, \pi(N)) & \dots & P(N |\; N, \pi(N)) \\ \end{pmatrix}\]

where $P(s' |\; s, \pi(s)) = 0$ if the state $s'$ cannot be reached from the state $s$.

Finally, from basic linear algebra we know that $(Ax)_{i} = \sum\limits_{j=0}^{N} A_{ij} x_j$, so $\sum\limits_{s' \in S} P(s'|\; s, \pi(s)) V^{\pi}(s')$ can be written as:

\[\sum\limits_{s' \in S} P(s'|\; s, \pi(s)) V^{\pi}(s') \stackrel{1}{=} \sum\limits_{s' \in S} P^{\pi}_{ss'} V^{\pi}_{s'} \stackrel{2}{=} (P^{\pi}V^{\pi})_{s}\]

where equality $1$ comes from the definition of $P^{\pi}$ and equality $2$ comes from the basic algebra reminder. Finally we can rewrite equation (3) using matrix notations as:

\begin{equation} V^{\pi} = R^{\pi} + \gamma P^{\pi}V^{\pi} \end{equation}

So we recover $V^{\pi}$ directly by computing:

\begin{equation} V^{\pi} = (I - \gamma P^{\pi})^{-1}R^{\pi} \end{equation}

Note: the matrix $(I - \gamma P^{\pi})$ is guaranteed to be invertible: $P^{\pi}$ is a stochastic matrix, so all of its eigenvalues have modulus at most $1$; hence every eigenvalue of $\gamma P^{\pi}$ has modulus at most $\gamma < 1$, and $(I - \gamma P^{\pi})$ has no zero eigenvalue.

We can then implement our policy_evaluation function using equation (5). We just need to create $P^{\pi}$ and $R^{\pi}$:

def policy_evaluation(pi, P, R, gamma, n_states):
    p = np.zeros((n_states, n_states))
    r = np.zeros((n_states, 1))
    
    for s in range(n_states):
        r[s] = R[s, pi[s]]
        p[s, :] = P[s, pi[s], :]
    
    # we take [:, 0] to return a vector because otherwise we have
    # a matrix of size (# states, 1)
    return np.linalg.inv((np.eye(n_states) - gamma * p)).dot(r)[:, 0]
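
As a quick sanity check (a small sketch, assuming the env, getReward and getProb helpers defined above and an arbitrary policy), the vector returned by policy_evaluation should satisfy the Bellman equation (3) up to floating-point error:

n_states = env.observation_space.n
gamma = 0.8
R, P = getReward(env), getProb(env)
pi = np.ones(n_states, dtype=int)       # arbitrary policy: always go DOWN

V = policy_evaluation(pi, P, R, gamma, n_states)

# rebuild R^pi and P^pi, then verify V = R^pi + gamma * P^pi V (equation (3))
R_pi = R[np.arange(n_states), pi]
P_pi = P[np.arange(n_states), pi, :]
print(np.allclose(V, R_pi + gamma * P_pi.dot(V)))   # True

Note also that, for better numerical behaviour, np.linalg.solve(np.eye(n_states) - gamma * p, r) could be used inside policy_evaluation instead of explicitly inverting the matrix.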

Once we have evaluated our policy, the next step is to improve it. As I mentioned earlier, the policy improvement step is well explained in Section 4.2 of the Sutton book. We just need to use equation (4.9) from the book, rewritten below:

\[\begin{align} \begin{split} \pi '(s) &= \arg\max\limits_{a} \sum\limits_{s', r} p(s', r |\; s, a)[r + \gamma V^{\pi}(s')] \\\\ &= \arg\max\limits_{a} \Big( R(s, a) + \gamma \sum\limits_{s'} P(s'|\; s, a) V^{\pi}(s') \Big) \end{split} \end{align}\]

Using the previous equation, we can implement the policy iteration algorithm:

def policy_iteration(env, epsilon, gamma, max_iter=10000):
    n_states, n_actions = env.observation_space.n, env.action_space.n
    
    # initialize arbitrary value function
    V = np.zeros(n_states)
    
    # initialize arbitrary policy
    pi = np.ones(n_states, dtype=int)
    
    R = getReward(env)
    P = getProb(env)
    
    i = 0
    
    while i < max_iter:
        V_prev = V.copy()
        
        # evaluate the policy
        V = policy_evaluation(pi, P, R, gamma, n_states)
        
        # policy improvement
        for s in range(n_states):
            pi[s] = np.argmax(R[s,:] + gamma * P[s, :, :].dot(V)) 
        
        if np.linalg.norm(V_prev - V) < epsilon:
            print("Policy iteration converged after ", i+1, "epochs")
            break
        
        i += 1
    
    return V, pi

The policy iteration algorithm outputs both $V$, the state value function, and $\pi$, the optimal policy found. In the same fashion as in the previous article, we can define utility functions to print these 2 outputs in a more friendly way. We can define a print_value function to display the state value function:

def print_value(V, width=4, height=4):
    return np.around(np.resize(V, (width, height)), 4)

as well as the print_policy function to display the policy found:

# let's plot the policy matrix (as in Part 1). according to
# https://github.com/openai/gym/blob/master/gym/envs/toy_text/frozen_lake.py
# LEFT = 0   DOWN = 1   RIGHT = 2  UP = 3
def print_policy(pi, width=4, height=4):
    table = {0: "←", 1: "↓", 2: "→", 3: "↑"}
    policy = np.resize(pi, (width, height))
    
    # transform using the dictionary
    return np.vectorize(table.get)(policy)
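
Putting the pieces together, a minimal run could look like the following sketch, reusing the env created earlier (epsilon = 1e-8 and gamma = 0.8 are assumptions here, chosen to match the values used for value iteration later on):

V, pi = policy_iteration(env, epsilon=1e-8, gamma=0.8)
print(print_value(V))
print(print_policy(pi))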

The final policy returned by our print_policy is:

[['↓', '↑', '→', '↑'],
['←', '←', '←', '←'],
['↑', '↓', '←', '←'],
['←', '→', '↓', '←']]

This policy looks like the one we found with the Q-learning algorithm. Actually we can see that in state $8$ (third row, first column), our agent will prefer to go UP instead of going RIGHT. It is understandable because by going RIGHT our agent has a probability of $1/3$ to actually go DOWN and end up in a hole.

I presented here one way to evaluate the policy exactly, by solving a linear algebraic equation. There are many other ways to evaluate a policy. For example we can use:

  • an iterative policy evaluation (see the sketch after this list)
  • a Monte-Carlo simulation
  • a Temporal-difference (TD) algorithm
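
As an illustration of the first option, here is a minimal sketch of iterative policy evaluation (assuming the R and P arrays built earlier): it simply applies the Bellman backup of equation (3) until the values stop changing, instead of solving the linear system.

def iterative_policy_evaluation(pi, P, R, gamma, n_states, tol=1e-8, max_iter=10000):
    V = np.zeros(n_states)
    # precompute R^pi and P^pi for the given deterministic policy
    R_pi = R[np.arange(n_states), pi]
    P_pi = P[np.arange(n_states), pi, :]
    for _ in range(max_iter):
        # Bellman backup: V <- R^pi + gamma * P^pi V
        V_new = R_pi + gamma * P_pi.dot(V)
        if np.linalg.norm(V_new - V) < tol:
            return V_new
        V = V_new
    return V

Its fixed point is exactly the $V^{\pi}$ of equation (5), so it could be swapped into policy_iteration in place of policy_evaluation.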

We will implement some of these algorithms in a future article. But now we will focus on the other algorithm I've talked about in the introduction: the value iteration algorithm.

Value iteration

Contrary to the policy iteration algorithm, which improves the policy at each iteration, the value iteration algorithm, as its name suggests, improves the state value function at each iteration.

The policy iteration algorithm has one drawback: it needs to evaluate the policy at each iteration. As we have seen previously, this involves solving a linear algebraic equation, which is costly because the Gauss-Jordan elimination algorithm has a complexity of $\mathcal{O}(N^3)$. The value iteration algorithm uses the same update rule as the policy iteration algorithm. The only difference is that we update the value function instead of updating the policy, which means that, instead of taking the $\arg\max$ as in equation (6), we just take the maximum:

\[\begin{align} \begin{split} V'(s) &= \max\limits_{a} \sum\limits_{s', r} p(s', r |\; s, a)[r + \gamma V(s')] \\\\ &= \max\limits_{a} \Big( R(s, a) + \gamma \sum\limits_{s'} P(s'|\; s, a) V(s') \Big) \end{split} \end{align}\]

To ensure that the algorithm terminates, we stop iterating when the state value function barely changes between 2 successive iterations. Hence the value iteration algorithm can be implemented in python as follows:

def valueIteration(env, epsilon, gamma, max_iter=10000):
    n_states, n_actions = env.observation_space.n, env.action_space.n
    
    # initialize utilities to 0
    V = np.zeros(n_states)
    
    R = getReward(env)
    P = getProb(env)
    
    i = 0
    while i < max_iter:
        i += 1
        prev_V = V.copy()
        for s in range(n_states):
            V[s] = max(R[s,:] + gamma * P[s, :, :].dot(V))

        if np.linalg.norm(prev_V - V) <= epsilon:
            print("Value iteration converged after ", i+1, "epochs")
            break
    
    return V

The problem with the value iteration algorithm is that it only returns the state value function, while we are interested in the policy. We thus need to define another function to transform the value function into a policy. But we already know how to do this! We just need to reuse equation (6). Here I choose to implement it in python using the first equality of equation (6), to show you another way of computing the policy without using the average reward $R(s,a)$ directly. The code is:

# transform value function into a policy
def value_to_policy(env, gamma, V):
    n_states, n_actions = env.observation_space.n, env.action_space.n
    
    policy = np.zeros(n_states, dtype=int)
    for state in range(n_states):
        best_action = 0
        best_value = -float("inf")
        for action in range(n_actions):
            moves = env.env.P[state][action] # [(prob, next_state, reward, terminate), ...]
            # first equality of equation (6): sum_{s', r} p(s', r | s, a) [r + gamma * V(s')]
            action_value = sum([prob * (reward + gamma * V[next_state]) for (prob, next_state, reward, _) in moves])

            if action_value > best_value:
                best_value = action_value
                best_action = action

        policy[state] = best_action
    
    return policy

To retrieve the optimal policy found via value iteration we just need to write:

V = valueIteration(env, 1e-8, 0.8) # epsilon = 1e-8, gamma = 0.8
pol = value_to_policy(env, 0.8, V)
print_policy(pol)

and we obtain the following policy:

[['↓', '↑', '→', '↑'],
['←', '←', '←', '←'],
['↑', '↓', '←', '←'],
['←', '→', '↓', '←']]

This is exactly the same policy we recovered earlier with the policy iteration algorithm.
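
Assuming the pi returned by the policy iteration sketch above is still in memory, this can be checked programmatically:

print(np.array_equal(pi, pol))   # expected: True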

Policy vs Value iteration

In our example the policy iteration algorithm converged in fewer iterations than the value iteration algorithm (4 iterations vs 46 iterations). But as we've already mentioned, each step of the policy iteration algorithm is very costly because it involves either using an iterative algorithm or solving a linear equation. So we should rather compare the algorithms by measuring their execution times. Here, we have:

  • value iteration: 100 loops, best of 3: 7.41 ms per loop
  • policy iteration: 100 loops, best of 3: 1.38 ms per loop

which makes sense because the matrices involved are not big, so the policy evaluation step is not that costly.
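
The "100 loops, best of 3" format suggests these numbers come from IPython's %timeit magic; in a notebook the comparison could be reproduced with something like the following (the epsilon and gamma values are assumptions, matching the ones used above):

# in an IPython/Jupyter cell
%timeit policy_iteration(env, 1e-8, 0.8)
%timeit valueIteration(env, 1e-8, 0.8)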

Conclusion

In this tutorial we have learned a lot. We first introduced the notion of MDP (Markov Decision Process). We then defined what a policy is and what a state value function is. Finally, we implemented the value iteration and policy iteration algorithms. We have seen that both algorithms give us the same results in our environment and that the policy iteration algorithm converges in a smaller number of iterations. Nonetheless, that does not by itself mean that the policy iteration algorithm is always faster than the value iteration algorithm, even though it happened to be faster here. In the next articles we will present some other well-known reinforcement learning algorithms before turning to more recent algorithms that use neural networks.