We provide a model of statistical inference in a setting where unawareness matters. In this setting, a decision maker forms an assessment of the available alternatives, but before making this assessment he can collect information about the environment. More specifically, we assume that the decision maker views data as being generated by an underlying stochastic process that satisfies a condition we call conditional exchangeability. That is, there is a sequence of random variables (Xt) that represents the realizations of repeated trials of an experiment, and the sequence (Xt) is said to be conditionally exchangeable if, for every n, (X1, …, Xn, Xn+1) is distributed as (X1, …, Xn, Xn+2).

Conditional exchangeability is related to the notion of exchangeability: exchangeable random variables are obviously conditionally exchangeable. There are, however, sequences of random variables that are conditionally exchangeable but not exchangeable. In particular, conditionally exchangeable sequences of random variables may fail to be stationary.

At any point t in time, the decision maker makes an assessment regarding bets whose payoffs depend on these realizations. However, his level of awareness restricts his perception of the realized state of the world: he can only partially observe the realizations of the sequence (Xt). As information trickles in, the decision maker discovers new states; consequently, his awareness level increases and his conceivable state space expands. Moreover, as new states are discovered, probability mass is shifted from old, non-null events to the newly created events. When choosing how much probability mass to shift, the decision maker, lacking familiarity with the new events, updates his assessment by taking into consideration the largest set of probability measures consistent with his previous assessment. As a result, newly learned events are initially seen as ambiguous.
As evidence accumulates, the ambiguity associated with those events gradually resolves, and the decision maker's assessment converges to the true conditional probability of those events. Our contribution can thus be summarized by three core features:

1. We provide a model of learning under unawareness that, similarly to Epstein and Schneider [2007], accommodates ambiguous beliefs.

2. We explicitly model the process of inductive reasoning implied by the dynamics of growing awareness described in Karni and Vierø [2013].

3. Because ambiguity emerges endogenously, we provide a foundation for the unanimity-rule preference representation axiomatized in Bewley [2002] and Gilboa, Maccheroni, Marinacci, and Schmeidler [2010].
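The claim that a conditionally exchangeable sequence need not be exchangeable or stationary can be seen in a minimal, hypothetical example (not taken from the paper): let X1 ~ Bernoulli(0.5) and let X2, X3, … be i.i.d. Bernoulli(0.8), all mutually independent. Assuming the definition quantifies over n ≥ 1, the sketch below checks both properties by direct enumeration of the (independent) joint distributions:

```python
from itertools import product

# Hypothetical example: X1 ~ Bernoulli(0.5); X2, X3, ... i.i.d. Bernoulli(0.8),
# all mutually independent. The tail variables are identically distributed and
# independent of X1, so (X1, ..., Xn, Xn+1) and (X1, ..., Xn, Xn+2) share a
# distribution for every n >= 1; but X1's marginal differs from X2's, so the
# sequence is neither exchangeable nor stationary.

def marginal(x, t):
    """Marginal pmf of X_t at x, for x in {0, 1}."""
    q = 0.5 if t == 1 else 0.8
    return q if x == 1 else 1.0 - q

def joint(indices, values):
    """Joint pmf of (X_t for t in indices) at `values` (by independence)."""
    out = 1.0
    for t, x in zip(indices, values):
        out *= marginal(x, t)
    return out

# Conditional exchangeability at n = 2:
# (X1, X2, X3) and (X1, X2, X4) have the same distribution.
for v in product([0, 1], repeat=3):
    assert abs(joint((1, 2, 3), v) - joint((1, 2, 4), v)) < 1e-12

# Exchangeability fails: swapping the roles of X1 and X2 changes probabilities.
print(joint((1, 2), (1, 0)))  # P(X1=1, X2=0) = 0.5 * 0.2 = 0.1
print(joint((1, 2), (0, 1)))  # P(X1=0, X2=1) = 0.5 * 0.8 = 0.4
```

The same enumeration extends to any n: because each Xt with t ≥ 2 has the same marginal and is independent of the rest, appending Xn+1 or Xn+2 yields identical joint laws, while the distinguished marginal of X1 blocks full exchangeability.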

Learning Under Unawareness

Fri 16 Sep 2016, 3:30pm–5:00pm


103 Colin Clark Bldg