On many factual questions, people hold beliefs that are systematically biased and polarized. One potential explanation is that when people receive new information, they engage in motivated reasoning: they distort their inferences in the direction of beliefs they are more motivated to hold. This paper develops a model of motivated reasoning and tests its predictions using a large online experiment in the United States. Distinguishing motivated reasoning from Bayesian updating has posed a challenge in environments where people hold preconceived beliefs. I create a new design that overcomes this challenge by analyzing how subjects assess the veracity of information sources that tell them that the median of their belief distribution is too high or too low. In this environment, a Bayesian would infer nothing about source veracity, but motivated reasoning predicts directional distortions. I reject Bayesian updating in favor of politically driven motivated reasoning on eight of nine hypothesized topics: immigration, income mobility, racial discrimination, crime, gender-based math ability, climate change, gun laws, and the performance of other subjects. Subjects also engage in motivated reasoning about their own performance. Motivated reasoning in response to these messages leads people's beliefs to become more polarized and less accurate.
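The claim that a Bayesian would infer nothing about source veracity can be sketched as follows (a stylized derivation; the notation is illustrative and not taken from the paper). Suppose a subject's prior over the true quantity $\theta$ has median $m$, a truthful source reports whether $\theta$ is above or below $m$, and a non-truthful source reports the opposite. Let $p$ denote the prior probability that the source is truthful. Then for the message "your median is too low" (i.e., $\theta > m$):

$$
P(\text{msg} \mid \text{True}) = P(\theta > m) = \tfrac{1}{2},
\qquad
P(\text{msg} \mid \text{False}) = P(\theta \le m) = \tfrac{1}{2},
$$

so by Bayes' rule the posterior probability that the source is truthful is

$$
P(\text{True} \mid \text{msg})
= \frac{p \cdot \tfrac{1}{2}}{p \cdot \tfrac{1}{2} + (1-p)\cdot \tfrac{1}{2}}
= p.
$$

By symmetry the same holds for the message "your median is too high." Because both messages occur with probability one half regardless of the source's veracity, the likelihood ratio is one and the Bayesian posterior equals the prior; any systematic directional shift in assessed veracity therefore identifies a departure from Bayesian updating.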