Section: Research Program
Decision-making Under Uncertainty
The phrase “decision under uncertainty” refers to the problem of making decisions when we have complete knowledge of neither the situation nor the consequences of the decisions, and when those consequences are non-deterministic.
We introduce two specific sub-domains: Markov decision processes, which model sequential decision problems, and bandit problems.
Reinforcement Learning
Sequential decision processes lie at the heart of the SequeL project; a detailed presentation of this problem may be found in Puterman's book [61].
A Markov Decision Process (MDP) is defined as the tuple $(\mathcal{S}, \mathcal{A}, P, r)$ where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $P$ is the probabilistic transition kernel, and $r : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ is the reward function. For the sake of simplicity, we assume in this introduction that the state and action spaces are finite. If the current state (at time $t$) is $s \in \mathcal{S}$ and the chosen action is $a \in \mathcal{A}$, then the Markov assumption means that the transition probability to a new state $s' \in \mathcal{S}$ (at time $t+1$) only depends on $(s, a)$. We write $P(s'|s,a)$ the corresponding transition probability. During a transition $(s, a) \to s'$, a reward $r(s, a, s')$ is incurred.
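To make the formalism concrete, the following minimal sketch represents a finite MDP in Python by explicit transition and reward tables; the array names and the two-state, two-action toy numbers are illustrative assumptions, not taken from the text.

```python
import numpy as np

# A finite MDP (S, A, P, r) stored as explicit arrays:
# P[s, a, s'] = transition probability, R[s, a, s'] = (deterministic) reward.
n_states, n_actions = 2, 2
P = np.zeros((n_states, n_actions, n_states))
R = np.zeros((n_states, n_actions, n_states))

P[0, 0] = [0.9, 0.1]; R[0, 0] = [0.0, 1.0]   # action 0 in state 0
P[0, 1] = [0.2, 0.8]; R[0, 1] = [0.0, 2.0]   # action 1 in state 0
P[1, 0] = [1.0, 0.0]; R[1, 0] = [0.0, 0.0]   # action 0 in state 1
P[1, 1] = [0.5, 0.5]; R[1, 1] = [0.5, 0.5]   # action 1 in state 1

assert np.allclose(P.sum(axis=2), 1.0)        # each (s, a) row must be a distribution
```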
In the MDP $(\mathcal{S}, \mathcal{A}, P, r)$, each initial state $s_0$ and action sequence $a_0, a_1, \ldots$ gives rise to a sequence of states $s_1, s_2, \ldots$, satisfying $\mathbb{P}(s_{t+1} = s' \,|\, s_t = s, a_t = a) = P(s'|s,a)$, and rewards (note that for simplicity, we consider the case of a deterministic reward function, but in many applications the reward itself is a random variable) $r_1, r_2, \ldots$ defined by $r_{t+1} = r(s_t, a_t, s_{t+1})$.
The history of the process up to time $t$ is defined to be $h_t = (s_0, a_0, r_1, s_1, \ldots, s_{t-1}, a_{t-1}, r_t, s_t)$. A policy $\pi$ is a sequence of functions $\pi_0, \pi_1, \ldots$, where $\pi_t$ maps the space of possible histories at time $t$ to the space of probability distributions over the space of actions $\mathcal{A}$. To follow a policy $\pi$ means that, at each time step $t$, assuming the process history up to time $t$ is $h_t$, the probability of selecting an action $a$ is equal to $\pi_t(a|h_t)$. A policy is called stationary (or Markovian) if $\pi_t$ depends only on the last visited state. In other words, a policy $\pi = (\pi_0, \pi_1, \ldots)$ is called stationary if $\pi_t(a|h_t) = \pi_0(a|s_t)$ holds for all $t \geq 0$. A policy is called deterministic if the probability distribution prescribed by the policy for any history is concentrated on a single action. Otherwise it is called a stochastic policy.
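As an illustration, here is a minimal sketch of following a stationary stochastic policy in the toy MDP defined above (it reuses the arrays `P`, `R`, `n_states`, `n_actions` from the previous sketch; the policy values are again arbitrary assumptions).

```python
rng = np.random.default_rng(0)

# A stationary stochastic policy: policy[s, a] = probability of choosing action a in state s.
policy = np.array([[0.5, 0.5],
                   [1.0, 0.0]])

def rollout(s0, horizon=10):
    """Follow the policy from state s0; return the visited states, actions and rewards."""
    s, states, actions, rewards = s0, [s0], [], []
    for _ in range(horizon):
        a = rng.choice(n_actions, p=policy[s])
        s_next = rng.choice(n_states, p=P[s, a])
        states.append(s_next); actions.append(a); rewards.append(R[s, a, s_next])
        s = s_next
    return states, actions, rewards
```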
We move from an MD process to an MD problem by formulating the goal of the agent, that is, what the sought policy has to optimize. It is very often formulated as maximizing (or minimizing), in expectation, some functional of the sequence of future rewards. For example, a usual functional is the infinite-time horizon sum of discounted rewards. For a given (stationary) policy $\pi$, we define the value function $V^\pi(s)$ of that policy $\pi$ at a state $s \in \mathcal{S}$ as the expected sum of discounted future rewards given that we start from the initial state $s$ and follow the policy $\pi$:
$$V^\pi(s) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t r_{t+1} \,\Big|\, s_0 = s, \pi\Big],$$
where $\mathbb{E}$ is the expectation operator and $\gamma \in (0,1)$ is the discount factor. This value function $V^\pi$ gives an evaluation of the performance of a given policy $\pi$. Other functionals of the sequence of future rewards may be considered, such as the undiscounted reward (see the stochastic shortest path problems [60]) and average reward settings. Note also that, here, we consider the problem of maximizing a reward functional, but a formulation in terms of minimizing some cost or risk functional would be equivalent.
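For a finite MDP, the value function of a fixed stationary policy can be computed exactly, since it solves the linear system $(I - \gamma P^\pi) V^\pi = r^\pi$. The sketch below does exactly that, reusing the toy arrays and `policy` from the previous sketches; the value of `gamma` is an arbitrary assumption.

```python
gamma = 0.95  # discount factor (illustrative value)

# Transition matrix and expected one-step reward under the fixed policy pi.
P_pi = np.einsum('sa,sap->sp', policy, P)         # P^pi[s, s']
r_pi = np.einsum('sa,sap,sap->s', policy, P, R)   # r^pi[s]

# V^pi is the unique solution of (I - gamma * P^pi) V = r^pi.
V_pi = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
```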
In order to maximize a given functional in a sequential framework, one usually applies Dynamic Programming (DP) [58], which introduces the optimal value function $V^*(s)$, defined as the optimal expected sum of rewards when the agent starts from a state $s$. We have $V^*(s) = \max_\pi V^\pi(s)$. Now, let us give two definitions about policies:
-
We say that a policy $\pi$ is optimal, if it attains the optimal value $V^*(s)$ for any state $s \in \mathcal{S}$, i.e., if $V^\pi(s) = V^*(s)$ for all $s \in \mathcal{S}$. Under mild conditions, deterministic stationary optimal policies exist [59]. Such an optimal policy is written $\pi^*$.
-
We say that a (deterministic stationary) policy $\pi$ is greedy with respect to (w.r.t.) some function $V$ (defined on $\mathcal{S}$) if, for all $s \in \mathcal{S}$,
$$\pi(s) \in \arg\max_{a \in \mathcal{A}} \sum_{s' \in \mathcal{S}} P(s'|s,a)\,\big[r(s,a,s') + \gamma V(s')\big],$$
where $\arg\max_{a \in \mathcal{A}} f(a)$ is the set of $a \in \mathcal{A}$ that maximizes $f(a)$. For any function $V$, such a greedy policy always exists because $\mathcal{A}$ is finite.
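A minimal sketch of this greedy-policy construction for the toy finite MDP above (names reused from the earlier sketches; ties are broken arbitrarily by `argmax`):

```python
def greedy_policy(V, P, R, gamma):
    """Return a deterministic policy greedy w.r.t. the function V."""
    # Q[s, a] = sum_{s'} P(s'|s,a) * (r(s,a,s') + gamma * V(s'))
    Q = np.einsum('sap,sap->sa', P, R + gamma * V[None, None, :])
    return Q.argmax(axis=1)   # one action per state; ties broken by lowest index
```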
The goal of Reinforcement Learning (RL), as well as that of dynamic programming, is to design an optimal policy (or a good approximation of it).
The well-known Dynamic Programming equation (also called the Bellman equation) provides a relation between the optimal value function at a state $s$ and the optimal value function at the successor states when choosing an optimal action: for all $s \in \mathcal{S}$,
$$V^*(s) = \max_{a \in \mathcal{A}} \sum_{s' \in \mathcal{S}} P(s'|s,a)\,\big[r(s,a,s') + \gamma V^*(s')\big].$$
The benefit of introducing this concept of optimal value function lies in the property that, from the optimal value function $V^*$, it is easy to derive an optimal behavior by choosing the actions according to a policy greedy w.r.t. $V^*$. Indeed, we have the property that a policy greedy w.r.t. the optimal value function is an optimal policy: if $\pi$ is greedy w.r.t. $V^*$, then $V^\pi = V^*$.
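The Bellman equation is also the basis of the classical value iteration algorithm. The sketch below iterates the Bellman optimality operator on the toy MDP above (names and the tolerance are assumptions) and returns both an approximation of $V^*$ and a policy greedy w.r.t. it.

```python
def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator until the value function stops changing."""
    V = np.zeros(P.shape[0])
    while True:
        # Q[s, a] = sum_{s'} P(s'|s,a) * (r(s,a,s') + gamma * V(s'))
        Q = np.einsum('sap,sap->sa', P, R + gamma * V[None, None, :])
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # approximate V* and a policy greedy w.r.t. it
        V = V_new

V_star, pi_star = value_iteration(P, R, gamma)
```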
In short, we would like to mention that most of the reinforcement learning methods developed so far are built on one (or both) of the following two approaches ([64]):
-
Bellman's dynamic programming approach, based on the introduction of the value function. It consists in learning a “good” approximation of the optimal value function, and then using it to derive a greedy policy w.r.t. this approximation (the greedy-policy and value-iteration sketches above illustrate the exact, tabular version of this approach). The hope (well justified in several cases) is that the performance of the policy greedy w.r.t. an approximation $V$ of $V^*$ will be close to optimality. Approximating the optimal value function is one of the major challenges inherent to the reinforcement learning problem. Approximate dynamic programming addresses the problem of estimating performance bounds (e.g. the loss in performance resulting from using a policy greedy w.r.t. some approximation $V$, instead of an optimal policy) in terms of the approximation error $\|V - V^*\|$ of the optimal value function $V^*$ by $V$. Approximation theory and Statistical Learning theory provide us with bounds in terms of the number of sample data used to represent the functions, and the capacity and approximation power of the considered function spaces.
-
Pontryagin's maximum principle approach, based on sensitivity analysis of the performance measure w.r.t. some control parameters. This approach, also called direct policy search in the Reinforcement Learning community, aims at directly finding a good feedback control law in a parameterized policy space without trying to approximate the value function. The method consists in estimating the so-called policy gradient, i.e. the sensitivity of the performance measure (the value function) w.r.t. some parameters of the current policy. The idea is that the optimal control problem is replaced by a parametric optimization problem in the space of parameterized policies. As such, deriving a policy gradient estimate leads to performing a stochastic gradient method in order to search for a locally optimal parametric policy (a minimal policy-gradient sketch follows this list).
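The following sketch illustrates the policy-gradient idea with a textbook REINFORCE-style estimator on a tabular softmax policy for the toy MDP above; the parameterization, step size and horizon are illustrative assumptions, not a description of any particular method developed in the project.

```python
theta = np.zeros((n_states, n_actions))   # parameters of a tabular softmax (Gibbs) policy

def softmax_policy(theta, s):
    """Action distribution pi_theta(.|s) for the softmax parameterization."""
    prefs = theta[s] - theta[s].max()
    p = np.exp(prefs)
    return p / p.sum()

def reinforce_step(theta, alpha=0.05, horizon=50):
    """One stochastic-gradient step from a single sampled trajectory (plain REINFORCE, no baseline)."""
    s, score_sum, G, discount = 0, np.zeros_like(theta), 0.0, 1.0
    for _ in range(horizon):
        p = softmax_policy(theta, s)
        a = rng.choice(n_actions, p=p)
        s_next = rng.choice(n_states, p=P[s, a])
        G += discount * R[s, a, s_next]          # discounted return of the trajectory
        g = -p; g[a] += 1.0                      # grad of log pi_theta(a|s) w.r.t. theta[s]
        score_sum[s] += g
        discount *= gamma
        s = s_next
    # High-variance gradient estimate: (sum of score functions) * (trajectory return).
    return theta + alpha * G * score_sum
```

In practice one would average the estimate over many trajectories and subtract a baseline to reduce variance; the point here is only the replacement of the control problem by a parametric stochastic optimization.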
Finally, many extensions of Markov decision processes exist, among which Partially Observable MDPs (POMDPs), the case where the current observation does not contain all the information required to decide with certainty which action is best.
Multi-arm Bandit Theory
Bandit problems illustrate the fundamental difficulty of decision making in the face of uncertainty: a decision maker must choose between sticking with what seems to be the best choice (“exploit”) and testing (“explore”) some alternative, hoping to discover a choice that beats the current best one.
The classical example of a bandit problem is deciding what treatment to give each patient in a clinical trial when the effectiveness of the treatments is initially unknown and the patients arrive sequentially. Bandit problems became popular with the seminal paper [62], after which they found applications in diverse fields such as control, economics, statistics, and learning theory.
Formally, a $K$-armed bandit problem ($K \geq 2$) is specified by $K$ real-valued distributions. At each time step a decision maker can select one of the distributions to obtain a sample from it. The samples obtained are considered as rewards. The distributions are initially unknown to the decision maker, whose goal is to maximize the sum of the rewards received, or equivalently, to minimize the regret, which is defined as the loss compared to the total payoff that can be achieved given full knowledge of the problem, i.e., when the arm giving the highest expected reward is pulled all the time.
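To make the regret notion explicit, write $\mu_k$ for the mean of arm $k$, $\mu^* = \max_k \mu_k$, $I_t$ for the arm pulled at time $t$ and $T_k(n)$ for the number of times arm $k$ has been pulled up to time $n$ (this notation is ours, introduced only for illustration). The expected regret after $n$ steps then decomposes as
$$\mathbb{E}[R_n] = n\,\mu^* - \mathbb{E}\Big[\sum_{t=1}^{n} X_{I_t}\Big] = \sum_{k:\,\mu_k < \mu^*} (\mu^* - \mu_k)\,\mathbb{E}[T_k(n)],$$
where $X_{I_t}$ is the reward received at time $t$; controlling the regret thus amounts to controlling how often the suboptimal arms are pulled.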
The name “bandit” comes from imagining a gambler playing with slot machines. The gambler can pull the arm of any of the machines, which produces a random payoff as a result: when arm $k$ is pulled, the random payoff is drawn from the distribution associated with arm $k$. Since the payoff distributions are initially unknown, the gambler must use exploratory actions to learn the utility of the individual arms. However, exploration has to be carefully controlled since excessive exploration may lead to unnecessary losses. Hence, to play well, the gambler must carefully balance exploration and exploitation. Auer et al. [57] introduced the algorithm UCB (Upper Confidence Bounds), which follows what is now called the “optimism in the face of uncertainty” principle. Their algorithm works by computing upper confidence bounds for all the arms and then choosing the arm with the highest such bound. They proved that the expected regret of their algorithm increases at most at a logarithmic rate with the number of trials, and that the algorithm achieves the smallest possible regret up to some sub-logarithmic factor (for the considered family of distributions).
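A minimal sketch of the UCB1 index strategy of Auer et al. (the Bernoulli arms and their means below are purely illustrative assumptions):

```python
import numpy as np

def ucb1(arms, n_rounds, rng):
    """Play a K-armed bandit with the UCB1 index: empirical mean + sqrt(2 ln t / pulls)."""
    K = len(arms)
    counts = np.zeros(K, dtype=int)   # number of pulls per arm
    means = np.zeros(K)               # empirical mean reward per arm
    total = 0.0
    for t in range(1, n_rounds + 1):
        if t <= K:
            k = t - 1                 # pull each arm once to initialize the estimates
        else:
            k = int(np.argmax(means + np.sqrt(2.0 * np.log(t) / counts)))
        x = arms[k](rng)              # sample a reward from the chosen arm
        counts[k] += 1
        means[k] += (x - means[k]) / counts[k]
        total += x
    return total, counts

# Example: three Bernoulli arms with (assumed) means 0.3, 0.5 and 0.6.
arms = [lambda r, p=p: float(r.random() < p) for p in (0.3, 0.5, 0.6)]
total, counts = ucb1(arms, n_rounds=10_000, rng=np.random.default_rng(1))
```

After enough rounds, `counts` concentrates on the arm with the highest mean, reflecting the logarithmic-regret behaviour guaranteed by the analysis in [57].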