RL Notes 3: Exploration vs Exploitation

- 16 mins

Previous week’s notes

(You can find the notebook containing the code here)

Until now, we used environments where we had perfect information: we knew the rewards and the transition probabilities. In those settings, we could use policy or value iteration to find the optimal policy.

Of course, the interesting cases are the ones where we don’t have this information. We don’t know the transition probabilities, and we may not even know the rewards.

Let’s briefly review our setting. We’re playing a game. There are certain states we can be in, and actions we can take. We don’t know the result of an action. Sometimes we get a reward. We don’t know when this happens, but we want more of it.

How do we solve situations like this? By repeatedly simulating the environment and using the results to estimate the unknown quantities (the fancy name for this technique is Monte Carlo methods).
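As a toy illustration (my own example, not from the notebook): to estimate an unknown expected reward, we simply sample it many times and average the results.

import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the environment: a reward whose distribution we don't know
# (here, +1 with probability 0.4 and -1 otherwise, so the true mean is -0.2).
def pull():
    return 1.0 if rng.random() < 0.4 else -1.0

# Monte Carlo estimate: simulate many times and average the outcomes.
samples = [pull() for _ in range(100_000)]
print(np.mean(samples))  # close to -0.2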

I discuss some concrete strategies below, but one theme will be common to them all:

The exploration-exploitation dilemma

At any point we have two options: we can explore the available actions, or we can exploit the policy we have so far judged to be the best. Most strategies start with exploration, so that we can gather information, and then gradually shift towards exploiting the best policy found.

If we keep on exploring for too long, we waste valuable opportunities to collect rewards.

If we exploit too early, we may be locked into a sub-optimal solution – had we gathered more information, we could have discovered a better policy.

Strategies illustrated here

In this post, I will discuss the following, very common policies:

- Greedy
- Explore First, Then Greedy
- Epsilon-Greedy
- Optimistic Initialization
- Optimism in the Face of Uncertainty
- Thompson Sampling

The Q Function

Many policies will refer in one way or another to the Q function. This function maps state-action combinations to average returns. It is basically a big lookup table.
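To make this concrete, here is a minimal sketch of such a lookup table, kept up to date by Monte Carlo averaging (my own illustration of the bookkeeping that the notebook’s Policy base class takes care of):

from collections import defaultdict

# Q maps (state, action) pairs to the average return observed so far.
Q = defaultdict(float)
counts = defaultdict(int)

def update_Q(state, action, G):
    """Fold one observed return G into the running average for (state, action)."""
    counts[(state, action)] += 1
    Q[(state, action)] += (G - Q[(state, action)]) / counts[(state, action)]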

The Environment: Cheating Roulette

OpenAI’s simple roulette

OpenAI Gym contains a simplified roulette environment. The rules are roughly as follows (see the Gym source for the exact payouts):

- The agent bets on a number from 0 to 36, or takes action 37 to walk away from the table.
- Each round, a number from 0 to 36 is rolled uniformly at random, and the agent wins or loses depending on whether its bet comes up.
- Walking away ends the game with a reward of 0.

Simple math will tell you that this roulette has a negative expected value (just as in real life). The best action the agent can take is simply to leave the table (again, as in real life).
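For illustration, with real-roulette payouts (a straight bet pays 35 to 1 and there are 37 equally likely numbers; the simplified environment’s exact payouts may differ slightly), the expected profit of a $1 bet on a single number is

$$\frac{1}{37}\cdot 35 - \frac{36}{37}\cdot 1 = -\frac{1}{37} \approx -0.027,$$

so every bet loses money in the long run.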

Cheating roulette modification

That’s not a very interesting setting, however, so we will modify this roulette. In particular, we will change it so that 0 is rolled more often than it would be on a fair wheel.

The effect of this change is that playing 0 now has a positive expected value. However, it is extremely difficult for an agent to discover this: even with the modification, 0 is only rolled about once in 30 rounds on average.
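To see why the sign flips, again using the 35-to-1 payout as an approximation (the modified environment’s exact numbers may differ a little), a bet on 0 is now worth roughly

$$\frac{1}{30}\cdot 35 - \frac{29}{30}\cdot 1 = \frac{6}{30} = 0.2 \text{ per round},$$

i.e. on the order of $20 per 100-round game, broadly in line with the roughly $22 per game measured below.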

Moreover, for an agent that doesn’t know about the 0 cheat (and plays randomly), the expected value is still negative. That makes leaving the table look attractive, even though it is now suboptimal.

Basically, I have set up an environment where it is easy for an agent to get trapped in a sub-optimal solution (leaving the table).

The Simulations

I will consider a game of roulette to be 100 rounds. The aim is to maximize the money the agent makes in these 100 rounds.

Each strategy will play 10 000 games (each 100 rounds). After each game, the agent will be able to update its policy. (Why not update after each round? Because that would increase computation time, as some strategies rely on the entire history of the game.)
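Roughly, each experiment looks like the sketch below. This is my own reconstruction, assuming the Policy interface suggested by the snippets later in the post (an env attribute, a pi() method that picks an action, and a callOnEnd() hook for end-of-game updates); the notebook’s actual loop also records the states, actions and rewards so that Q can be updated.

def run_experiment(policy, n_games=10_000, rounds_per_game=100):
    """Play many games, letting the policy update itself after each one."""
    profits = []
    for _ in range(n_games):
        policy.env.reset()
        profit = 0.0
        for _ in range(rounds_per_game):
            action = policy.pi()                          # choose an action
            _, reward, done, _ = policy.env.step(action)  # old gym API: 4-tuple
            profit += reward
            if done:                                      # e.g. the agent left the table
                break
        policy.callOnEnd()                                # end-of-game policy update
        profits.append(profit)
    return profits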

The Policies

Basic Benchmark Policies

Before we discuss the strategies listed above, let’s have some useful benchmarks.

The first option is to always leave the table; never play. Obviously, that will have a return of 0.

class DontPlay(Policy):      
    def pi(self):  return 37  # This action is leaving the table

More interestingly, what if we play in ‘God mode’, knowing the 0 cheat? In that case, we earn, on average, about $22 per game (per 100 rounds).

Below is a plot of the money we earn in each simulated game. Note that even knowing the cheat, we still lose money in most games, as we need 4 zeros per game to make a positive profit. Our positive average is due to the few games where zeros came up many times.

class OptimalPlay(Policy):
    def pi(self):  return 0

[Plot: profit per game when always betting 0 (God mode)]

Just to check, what happens if we always bet on a random number? The return is somewhat negative: we lose, on average, about $1 per game (per 100 rounds).

class RandPolicy(Policy):
    def pi(self):  return self.env.action_space.sample()

[Plot: profit per game, random policy]

Greedy Policy

Our simplest policy is the greedy policy. A greedy policy simply plays the number with the highest Q value (the highest average historical return). Generally, it’s not a good choice, as it can lock itself into a sub-optimal solution. (There is some chance that the greedy policy finds the optimal solution, just not a reliably large one.)

Benefits

- Very simple, with nothing to tune.
- Exploits what it has learned immediately; no rounds are deliberately spent on exploration.

Drawbacks

- Does essentially no exploration, so it can easily lock itself into a sub-optimal solution (here: leaving the table).

class Greedy(Policy):
    def pi(self):  return self.getMaxQ()   

[Plot: profit per game, greedy policy]

As we can see, in the first few games our policy chased whichever numbers happened to earn positive rewards by chance. Very quickly, however, it concluded that it’s best to leave the game, and it earned 0 reward from then on.

Below I plot the actions taken in the first and last 1000 rounds.

[Plot: actions taken in the first and last 1000 rounds, greedy policy]

Explore First Then Greedy

Since our greedy policy had the drawback of not doing enough exploration, let’s add explicit exploration rounds.

We will let our policy follow random actions for the first k games. After k games, it will follow a greedy policy.

Benefits

- Guarantees a fixed amount of exploration before committing, so the Q estimates rest on a reasonable amount of data.
- Still very simple to implement and reason about.

Drawbacks

- The length of the exploration phase k has to be chosen up front: explore too little and we may miss the rare 0 payoff; explore too long and we waste games we could have spent exploiting.

class ExpFirst(Policy):
    def __init__(self, exp_num):
        super().__init__()
        self.exp_num = exp_num    
    def pi(self):
        if self.episode <= self.exp_num:  
            return self.env.action_space.sample()
        else:
            return self.getMaxQ()         

In this example, we explore for the first 1000 games. In this particular case, the policy managed to find the optimum afterwards. Note, however, that luck is still involved: there is no guarantee that replaying the same policy would find the optimum again (it depends on how often 0 lands in the first 1000 games).

[Plot: profit per game, explore-first with 1000 exploration games]

The actions taken in the first and last 1000 rounds:

[Plot: actions taken in the first and last 1000 rounds, explore-first policy]

Similar results with 2000 games of exploration.

[Plot: profit per game, explore-first with 2000 exploration games]

Another pitfall: exploring for too long. Here we explore for 5000 games (half the time) before exploiting. This makes us more likely to find the optimal policy, but it reduces our exploitation opportunities: our average profit is little more than half of what it was in the first case.

[Plot: profit per game, explore-first with 5000 exploration games]

Epsilon-Greedy

Another way to modify the pure greedy strategy is to add some noise to it. Specifically, follow the greedy strategy with probability $1 - \epsilon$, and take a random action with probability $\epsilon$.

Over time, we want to decrease $\epsilon$, so that we gradually shift from exploration to exploitation. One way to parameterize it:

$$\epsilon_n = \frac{k}{n},$$

where $n$ is the number of games we have played so far and $k$ is a constant.

Benefits

- Never stops exploring entirely, so a wrong early conclusion can in principle still be corrected.
- Shifts smoothly from exploration to exploitation as epsilon decays.

Drawbacks

- The decay schedule (the constant k) has to be tuned, and as the runs below show, luck still plays a large role in whether the rare 0 payoff is ever discovered.

class EpsGreedy(Policy):
    
    def __init__(self, k):
        super().__init__()
        self.k = k
    
    def pi(self):
        epsilon = self.k / max(self.episode, 1)
        if np.random.rand() <= epsilon:
            return self.env.action_space.sample()
        else:
            return self.getMaxQ()

[Plot: profit per game, epsilon-greedy]

[Plot: actions taken in the first and last 1000 rounds, epsilon-greedy]

A demonstration of the luck involved: even though we explore twice as long as in the previous case, we were unlucky and didn’t find the optimal solution.

[Plot: profit per game, epsilon-greedy with more exploration]

[Plot: actions taken in the first and last 1000 rounds, epsilon-greedy with more exploration]

More exploration: we find the optimum, but we reduce our profit by exploring for too long.

[Plot: profit per game, epsilon-greedy with even more exploration]

[Plot: actions taken in the first and last 1000 rounds, epsilon-greedy with even more exploration]

Optimistic Initialization

This is not so much a stand-alone strategy as a hack on our greedy policy. We set the initial Q values unrealistically high, then follow a greedy strategy. The effect of this hack is that our policy will try every available action at least once, since an untried action’s initialized Q value is higher than anything the tried actions have actually earned.

While strictly better than a pure greedy strategy, it still suffers from the same drawbacks: after the initial sweep through the actions, there is no further exploration.

Note however that this hack is available for almost any policy, not just greedy. If we want to explore all actions before committing to our policy, we initialize all Q’s to be unrealistically high. If the number of possible actions isn’t too high, this is a reasonable thing to do.

class optInitPolicy(Policy):
    def __init__(self):
        super().__init__()
        self.Q = 100*np.ones((self.nS, self.nA))
    
    def pi(self):  return self.getMaxQ()

In this case, the hack didn’t make a difference relative to the greedy policy.

[Plot: profit per game, optimistic initialization]

[Plot: actions taken in the first and last 1000 rounds, optimistic initialization]

Optimism in the Face of Uncertainty

Our policies so far haven’t implemented a clever way to explore our action space. They just followed random actions.

We can do better. To get some intuition, consider the following two action histories:

- number 7 was played only a handful of times, with an average return of -0.33;
- number 8 was played 100 times more often, with the same average return of -0.33.

The average return is the same in both cases (-0.33). However, we are a lot more confident that number 8 has a negative expected reward, since we played it 100 times more. Number 7 could simply have had a few unlucky spins.

Thus, we will favor uncertain actions. In the above example, we favor 7. If 7 is indeed a bad number, successive trials will prove so.

We select a probability p such that an action we do not choose is at most p likely to be the optimal one. We then use Hoeffding’s Inequality and choose our action according to

$$a = \arg\max_a \big[\, Q(s, a) + H(s, a) \,\big],$$

where H is a measure of uncertainty. In the case of Bernoulli random variables, it is equal to

$$H(s, a) = \sqrt{\frac{-\ln p}{2\, N(s, a)}},$$

where $N(s, a)$ is the number of times the given state-action pair has been tried.
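As a quick illustration with made-up counts: suppose number 7 was tried 3 times and number 8 was tried 300 times, both with an average return of -0.33, and we pick p = 0.05.

import numpy as np

p = 0.05

def bonus(n):
    """Hoeffding uncertainty bonus for an action tried n times."""
    return np.sqrt(-np.log(p) / (2 * n))

print(-0.33 + bonus(3))    # ~  0.38 -> number 7 still looks worth trying
print(-0.33 + bonus(300))  # ~ -0.26 -> number 8 looks safely bad

The rarely tried number gets a much larger bonus, so the policy keeps probing it until the uncertainty is resolved one way or the other.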

In our case, the formula doesn’t quite work, as the payoff from playing 0 isn’t a [0, 1] (or [-1, 1]) variable. Our policy will reduce the uncertainty about the probability that a number appears, but it won’t properly reduce the uncertainty about the expected value (which is what we care about).

Usually, this strategy is not a bad approach (unless the payoff distribution is extremely skewed, as in this case).

Benefits

- Explores in a directed way: it keeps trying the actions it is most uncertain about instead of picking randomly.
- Only one parameter (p), and the exploration bonus fades automatically as an action gets tried more often.

Drawbacks

- The Hoeffding-style bonus assumes bounded (e.g. [0, 1]) rewards; with a payoff as skewed as the rare, large win on 0, it can understate the remaining uncertainty about the expected value.

class optUnc(Policy):
    def __init__(self, p=0.05):
        super().__init__()
        self.H = np.ones(self.Q.shape)   # uncertainty bonus, initially large for untried actions
        self.H[0, 37] = 0                # leaving the table has a known value, so no bonus
        self.p = p
        
    def pi(self):
        Q = self.Q[self.state, :]
        H = self.H[self.state, :]
        return np.argmax(Q + H)          # pick the action with the best optimistic value
        
    def updateH(self):
        # Recompute the Hoeffding bonus from the number of observed returns
        for s in range(self.nS):
            for a in range(self.nA):
                if self.returns[(s, a)] != [] and a != 37:
                    n = len(self.returns[(s, a)])
                    self.H[s, a] = np.sqrt(-np.log(self.p) / (2 * n))
             
    def callOnEnd(self):  self.updateH()

Voila: after only a few rounds of exploration, we find the best policy.

[Plot: profit per game, optimism in the face of uncertainty]

Thompson Sampling

Here is another way to explore intelligently: sample actions in proportion to the probability that they are optimal. This way, actions with good historical returns are favored, but we also occasionally sample actions that haven’t done well.

Specifically, we will assume a prior distribution for the expected payoff for each number. In this example, we use a Beta distribution with parameters (1, 1). Then, as we collect rewards, we update these parameters.

In each round, we take a sample from the distribution of each action. We choose the action with the highest sampled value.

Benefits

- Exploration is automatic and self-adjusting: unpromising actions still get sampled occasionally, so a single lucky 0 can quickly revive interest in it.
- No explicit exploration schedule to tune.

Drawbacks

- Requires choosing a prior, and the simple Beta update used here is a somewhat ad-hoc fit for rewards that are not 0/1 outcomes.

class ThompSamp(Policy):
    def __init__(self):
        super().__init__()
        self.alphas = np.ones(self.Q.shape)   # Beta(1, 1) priors for every action
        self.betas = np.ones(self.Q.shape)
        
    def pi(self):
        s = self.state
        n = self.alphas[s, :].shape[0]
        sampl = np.ones(n) / 2                # the last action (leave the table) stays at 0.5
        for i in range(n - 1):
            sampl[i] = np.random.beta(self.alphas[s, i], self.betas[s, i])
        self.current_action = np.argmax(sampl)
        return self.current_action
             
    def callOnEnd(self):
        # Update the Beta parameters with the returns observed in this game
        for state, action, G in self.states_actions_returns:
            if G > 0:
                self.alphas[state, action] += G
            elif G < 0:
                self.betas[state, action] += -G

This graph is interesting. Only towards the second half does the agent swing to the optimal policy. The likely reason: until then, zeros didn’t have a good track record (remember, they only come up about once in 30 spins on average), which made them less likely to be selected.

However, thanks to the randomness of the sampling, every now and then we did select a zero, and a single success caused a major update to the distribution parameters. Zeros then got selected more and more often, and a positive feedback loop built up.

[Plot: profit per game, Thompson sampling]

[Plot: actions taken in the first and last 1000 rounds, Thompson sampling]

Krisztian Kovacs

Learning to teach computers to learn.
