We introduce a model in which agents observe signals about the state of the world, some of which are open to interpretation. Our decision makers use Bayes' rule in an iterative way: first to interpret each signal and then to form a posterior on the sequence of interpreted signals. This 'double updating' leads to confirmation bias and can lead agents who observe the same information to polarize: the distance between their beliefs can grow after observing a common sequence of signals. Such updating is approximately optimal if agents must interpret ambiguous signals and sufficiently discount the future. If they are very patient but can store only interpretations of ambiguous signals, then a time-varying random interpretation rule (still double updating) is approximately optimal. In a continuous (normally distributed) version of the model, we show that posterior beliefs always converge, but to a limit that retains the influence of the prior and early signals and is therefore wrong with probability one. Beliefs become arbitrarily accurate as signal accuracy increases, but they are always biased. We explore the model in an online experiment in which individuals interpret research summaries about climate change and the death penalty and report their beliefs. Consistent with the model, not only is there a significant relationship between an individual's prior and their interpretation of the summaries, but more than half of the subjects exhibit polarizing behavior: they shift their beliefs further from the average belief after seeing the same summaries as all other subjects.
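The double-updating mechanism described above can be illustrated with a minimal sketch. All modeling details here are assumptions for illustration (the abstract does not specify the formal setup): a binary state, signals of accuracy q, and an interpretation step that classifies an ambiguous signal as whichever reading is more probable under the agent's current belief before updating on that classification.

```python
# Hypothetical sketch of 'double updating' with a binary state in {A, B}.
# A signal reads 'a' with probability q when the state is A (accuracy q).
# The model parameters and interpretation rule are illustrative assumptions.

def bayes_update(p, sig, q):
    """Posterior P(state = A) after observing signal sig in {'a', 'b'}."""
    like_a = q if sig == 'a' else 1 - q        # likelihood under state A
    like_b = 1 - q if sig == 'a' else q        # likelihood under state B
    return p * like_a / (p * like_a + (1 - p) * like_b)

def double_update(p, raw, q):
    """First use of Bayes' rule: interpret an ambiguous signal as its most
    likely reading under the current belief. Second use: update on the
    interpreted signal as if it had been observed directly."""
    if raw == 'ambiguous':
        sig = 'a' if p >= 0.5 else 'b'         # interpretation step
    else:
        sig = raw
    return bayes_update(p, sig, q)             # updating step

# Two agents with different priors see the same ambiguous stream.
q = 0.7
p_hi, p_lo = 0.6, 0.4
for _ in range(5):
    p_hi = double_update(p_hi, 'ambiguous', q)
    p_lo = double_update(p_lo, 'ambiguous', q)
print(round(p_hi, 3), round(p_lo, 3))
```

Under these assumptions, each agent resolves every ambiguous signal in favor of its own prior, so the two beliefs move apart even though the observed stream is identical, matching the polarization pattern the abstract describes.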