Projects

COLLECTIVE INTELLIGENCE

 

  • Misha Teplitskiy, Hardeep Ranu, Gary Gray, Eva Guinan, Karim Lakhani. "Do Experts Listen to Other Experts? A Field Experiment in Scientific Peer Review." (Manuscript in preparation.)

    • Many organizations rely on experts to evaluate ideas, investment opportunities, and so on. However, when the objects to be evaluated are complex and require the opinions of multiple experts, it is unclear whether experts should provide evaluations independently or collaboratively. Although normative models of decision-making suggest that information exchange among individuals improves judgments, it is unknown whether and under what conditions experts actually utilize information from one another. Here, we report an experiment that measures information utilization among 277 expert reviewers of 47 applications for multidisciplinary grant funding. In particular, we measure whether reviewers update how they score applications after observing the scores of (fabricated) “other reviewers.” The scores of the “other reviewers” were randomly generated, and their discipline was randomly described as the same as or different from the reviewer’s own. We found that reviewers updated their scores in 47% of cases after exposure to the fabricated stimuli. Contrary to normative models, reviewers were insensitive to the disciplinary expertise of the stimulus. Much more important was the reviewer’s own identity: female reviewers updated their scores 12% more often than male reviewers, and reviewers from Harvard updated their scores 11% less often than those from other institutions. Lastly, updating was more common for medium- and high-scoring applications, leading to a 40% turnover in the top five proposals between the pre- and post-exposure rankings. The experiment reveals that insights from behavioral decision-making extend even to tenured faculty at top medical schools, and suggests a new pathway through which bias can enter evaluations: gendered openness to external information.

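    As a rough, hypothetical illustration of the updating measure described above, the Python sketch below computes an overall updating rate and compares it across a reviewer attribute. The data, column names, and grouping are invented for the example; this is not the study's code or data.

        # Hypothetical sketch: how an "updating rate" could be computed.
        # All data and names below are invented for illustration.
        import pandas as pd

        # One row per (reviewer, application): score before and after
        # seeing the fabricated "other reviewers'" scores
        df = pd.DataFrame({
            "female":       [1, 1, 0, 0, 1, 1],
            "score_before": [3.0, 4.0, 2.5, 3.5, 4.0, 2.0],
            "score_after":  [3.0, 3.5, 2.5, 4.0, 4.0, 2.5],
        })

        df["updated"] = (df["score_after"] != df["score_before"]).astype(int)
        print("Share of cases with an updated score:", df["updated"].mean())
        # Compare updating rates by reviewer attribute (e.g., gender)
        print(df.groupby("female")["updated"].mean())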
 

  • Feng Shi, Misha Teplitskiy (equal authors), James Evans, Eamon Duede. (Under review.) "Wisdom of Polarized Crowds." (Link to article).

    • As political polarization in the United States continues to rise, the question of whether polarized individuals can fruitfully cooperate becomes pressing. Although diversity of individual perspectives typically leads to superior team performance on complex tasks, strong political perspectives have been associated with conflict, misinformation, and a reluctance to engage with people and perspectives beyond one's echo chamber. It is unclear whether self-selected teams of politically diverse individuals will create higher- or lower-quality outcomes. In this paper, we explore the effect of team political composition on performance through analysis of millions of edits to Wikipedia's Political, Social Issues, and Science articles. We measure editors' political alignments by their contributions to conservative versus liberal articles. A survey of editors validates that those who primarily edit liberal articles identify more strongly with the Democratic party and those who edit conservative ones with the Republican party. Our analysis then reveals that polarized teams, those consisting of a balanced set of politically diverse editors, create articles of higher quality than politically homogeneous teams. The effect appears most strongly in Wikipedia's Political articles, but is also observed in Social Issues and even Science articles. Analysis of article "talk pages" reveals that politically polarized teams engage in longer, more constructive, competitive, and substantively focused but linguistically diverse debates than do teams of political moderates. More intense use of Wikipedia policies by politically diverse teams suggests institutional design principles that can help unleash the power of politically polarized teams.

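    A minimal sketch of the kind of alignment and polarization scoring the abstract describes, under assumed definitions: the scoring rule, function names, and the mean-absolute-deviation polarization measure are illustrative assumptions, not the paper's exact operationalization.

        # Sketch under assumed definitions; not the paper's exact measures
        from statistics import mean

        def editor_alignment(liberal_edits, conservative_edits):
            # Score in [-1, 1]: -1 = edits only liberal articles,
            # +1 = edits only conservative articles
            total = liberal_edits + conservative_edits
            if total == 0:
                return 0.0
            return (conservative_edits - liberal_edits) / total

        def team_polarization(alignments):
            # Mean absolute deviation from the team average: high for a
            # balanced mix of strong opposing views, low for homogeneous
            # or uniformly moderate teams
            m = mean(alignments)
            return mean(abs(a - m) for a in alignments)

        diverse     = [editor_alignment(10, 90), editor_alignment(90, 10)]
        homogeneous = [editor_alignment(10, 90), editor_alignment(15, 85)]
        print(team_polarization(diverse), team_polarization(homogeneous))  # 0.8 vs 0.05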
 

EVALUATION OF SCIENTIFIC IDEAS, SOCIOLOGY OF KNOWLEDGE

 

  • Tod S. Van Gunten, John Levi Martin, Misha Teplitskiy. 2016. "Consensus, Polarization, and Alignment in the Economics Profession." Sociological Science. (Link to article).

    • Scholars interested in the political influence of the economics profession debate whether the discipline is unified by policy consensus or divided among competing schools or factions. We address this question by reanalyzing a unique recent survey of elite economists. We present a theoretical framework based on a formal sociological approach to the structure of belief systems and propose alignment, rather than consensus or polarization, as a model for the structure of belief in the economics profession. Moreover, we argue that social clustering in a heterogeneous network topology is a better model for disciplinary social structure than discrete factionalization. Results show that there is a robust latent ideological dimension related to economists’ departmental affiliations and political partisanship. Furthermore, we show that economists closer to one another in informal social networks also share more similar ideologies.

  • Misha Teplitskiy, Daniel Acuna, Aida Raoult, Konrad Kording, James Evans. (Under review.) "The Social Structure of Scientific Consensus." (Link to preprint).

    • Scientific journals often rely on the judgments of external reviewers, but reviewers may be biased towards authors to whom they are personally connected. Although such biases have been observed in prospective judgments of (uncertain) future performance, it is unknown whether they occur in assessments of already completed work, and if so, why. This study presents evidence that personal connections between authors and reviewers of neuroscience research are associated with biased decisions, and explores the mechanisms driving the effect. Using the reviews of 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which evaluates manuscripts only on whether they are scientifically valid, we find that reviewers favored authors closer to them in the co-authorship network, by ~0.11 points (on a 1.0 – 4.0 scale) for each step of proximity. PLOS ONE’s validity-focused review and the substantial favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.” The findings suggest that removing bias from peer review cannot be accomplished simply by recusing the most closely connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.

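    The proximity effect above lends itself to a toy worked example: build a small co-authorship graph, take the reviewer-to-author distance in steps, and regress scores on that distance. Everything below (graph, scores, names) is fabricated for illustration; the paper's actual estimate is ~0.11 points per step.

        # Toy example only; the graph and scores are fabricated
        import networkx as nx
        import numpy as np

        G = nx.Graph()
        G.add_edges_from([("reviewer", "A"), ("A", "B"), ("B", "C"), ("C", "D")])

        authors = ["A", "B", "C", "D"]
        # Distance from the reviewer, in co-authorship steps
        dist = np.array([nx.shortest_path_length(G, "reviewer", a) for a in authors])
        # Hypothetical review scores on the 1.0-4.0 scale
        scores = np.array([3.9, 3.7, 3.6, 3.5])

        # Closer authors receive higher scores: the slope is points per step
        slope, intercept = np.polyfit(dist, scores, 1)
        print(round(slope, 2))  # -0.13 in this toy data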
 

  • Daniel Acuna, Misha Teplitskiy, James Evans, Konrad Kording. (Under review.) "Should journals allow authors to suggest reviewers?"

  • James Evans and Misha Teplitskiy (equal authors). (Under review.) "How Firm is Sociological Knowledge? Reanalysis of GSS findings with alternative models and out-of-sample data, 1972-2016."

    • Published findings may be fragile because hypotheses were tailored to fit the data and knowledge about insignificant relationships ("negative knowledge") remains unreported, or because the world has changed and once-robust relationships no longer hold. We reanalyze findings from hundreds of articles that use the General Social Survey, 1972-2012, estimating (1) published models and alternative specifications on in-sample data, and (2) published models on future waves of the GSS. In both cases, the number of significant coefficients, the standardized coefficient sizes, and R² are significantly reduced. Our findings suggest that social scientists engage in only moderate data mining, but that they could benefit from more; a bigger concern is the relevance of older published knowledge to the contemporary world.

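    To make the design concrete, here is a schematic of the two-part exercise, with simulated data standing in for the GSS and a single made-up specification; it assumes statsmodels and is not the authors' replication code.

        # Schematic only: simulated data stand in for GSS waves
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 500
        insample = pd.DataFrame({"x": rng.normal(size=n)})
        insample["y"] = 0.3 * insample["x"] + rng.normal(size=n)
        future = pd.DataFrame({"x": rng.normal(size=n)})
        future["y"] = 0.1 * future["x"] + rng.normal(size=n)  # weaker relationship

        # (1) the "published" model on in-sample data
        published = smf.ols("y ~ x", data=insample).fit()
        # (2) the same specification re-estimated on a future wave
        refit = smf.ols("y ~ x", data=future).fit()

        print("R2 in-sample:", round(published.rsquared, 3),
              "| R2 future wave:", round(refit.rsquared, 3),
              "| x still significant?", bool(refit.pvalues["x"] < 0.05))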
 

  • Misha Teplitskiy. 2015. "Frame Search and Re-search: How Quantitative Sociological Articles Change During Peer Review." The American Sociologist. (Link to article).

    • Peer review is a central institution in academic publishing, yet its processes and effects on research remain opaque. Empirical studies have (1) been rare, because data on the peer review process are generally unavailable, and (2) conceptualized peer review as gate-keeping, in which reviewers either accept or reject a manuscript, overlooking peer review's role in constructing articles. This study uses a unique data resource to study how sociological manuscripts change during peer review. Authors of published sociological research often present earlier versions of that research at annual meetings of the American Sociological Association (ASA). Many of these annual meeting papers are publicly available online and tend to be uploaded before undergoing formal peer review. A data sample is constructed by linking these papers to the respective versions published between 2006 and 2012 in two peer-reviewed journals, American Sociological Review and Social Forces. Quantitative and qualitative analyses examine changes across article versions, paying special attention to how elements of data analysis and theory in the ASA versions change. Results show that manuscripts tend to change more substantially in their theoretical framing than in their data analyses. The finding suggests that a chief effect of peer review in quantitative sociology is to prompt authors to adjust their theoretical framing, a mode of review I call "data-driven." The data-driven mode of review problematizes the vision of sociological research as addressing theoretically motivated questions.

 

  • Misha Teplitskiy and Von Bakanic. 2016. "Do Peer Reviewers Predict Impact?: Evidence from the American Sociological Review, 1977-1982." Socius. (Link to article).

    • Peer review is the premier method of evaluation in academic publishing, but the validity of reviewers' and editors' judgments has long been questioned. Here we investigate how well peer reviews predict an article's eventual impact. Most previous studies have lacked an external measure of validity and, consequently, compared reviewers' judgments only to one another. These studies find that reviewers disagree frequently, and some have interpreted the disagreement as confirming the common suspicion that reviewers base their judgments on idiosyncratic preferences and allegiances. Yet reviewers may also disagree about the quality of a manuscript for other reasons, including that the manuscript's quality is on the cusp between acceptance and rejection. Previous studies could not distinguish between these interpretations of reviewer disagreement and, consequently, the validity of peer review decisions has remained unclear. To rectify this problem, we use historical peer review data from the journal American Sociological Review and compare editorial judgments to an external validity measure: citations. Results indicate that, in the short term, consensus-accept articles do not substantially outperform those over which reviewers disagreed. However, differences become manifest in the long-term citation trajectories, as consensus-accept articles outperform all others. This finding challenges the common view that peer review decisions are valid only in the short term and that long-term scientific trajectories are unpredictable.

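    The short- vs. long-term comparison can be illustrated with a small pandas table of citation counts by reviewer-agreement group and horizon; the numbers below are fabricated purely to show the shape of the analysis.

        # Fabricated numbers, purely to show the shape of the comparison
        import pandas as pd

        df = pd.DataFrame({
            "group":           ["consensus-accept"] * 4 + ["split-decision"] * 4,
            "years_since_pub": [1, 5, 10, 20] * 2,
            "citations":       [4, 12, 30, 80, 4, 11, 18, 25],
        })

        # Mean citations per group at each horizon: the gap between
        # consensus-accept and split articles opens only at long horizons
        print(df.pivot_table(index="years_since_pub", columns="group",
                             values="citations", aggfunc="mean"))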
 

DIFFUSION OF SCIENTIFIC KNOWLEDGE

 

  • Misha Teplitskiy, Grace Lu, Eamon Duede. 2016. "Amplifying the Impact of Open Access: Wikipedia and the Diffusion of Science." Journal of the Association for Information Science and Technology. (Link to article).

    • With the rise of Wikipedia as a first-stop source for scientific knowledge, it is important to compare its representation of that knowledge to that of the academic literature. Here we identify the 250 most heavily used journals in each of 26 research fields (4,721 journals, 19.4M articles in total) indexed by the Scopus database, and test whether topic, academic status, and accessibility make articles from these journals more or less likely to be referenced on Wikipedia. We find that a journal's academic status (impact factor) and accessibility (open access policy) both strongly increase the probability of its being referenced on Wikipedia. Controlling for field and impact factor, the odds that an open access journal is referenced on the English Wikipedia are 47% higher compared to paywall journals. One implication of this study is that a major consequence of open access policies is to significantly amplify the diffusion of science, through an intermediary like Wikipedia, to a broad audience.
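
    As a hedged sketch of how a "47% higher odds" figure can be read off a logistic regression with field and impact-factor controls: the data below are simulated with a true open-access odds ratio of 1.47, and all column names are hypothetical; this is not the study's model or data.

        # Simulated data with a true open-access odds ratio of 1.47;
        # column names are hypothetical, not the study's
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 20000
        df = pd.DataFrame({
            "open_access":   rng.integers(0, 2, n),
            "impact_factor": rng.gamma(2.0, 1.5, n),
            "field":         rng.choice(["bio", "phys", "soc"], n),
        })
        logit_p = -2 + np.log(1.47) * df["open_access"] + 0.3 * df["impact_factor"]
        df["referenced"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        fit = smf.logit("referenced ~ open_access + impact_factor + C(field)",
                        data=df).fit(disp=0)
        # exp(coefficient) is the odds ratio; 1.47 reads as "47% higher odds"
        print(round(np.exp(fit.params["open_access"]), 2))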