Projects

COLLECTIVE INTELLIGENCE

 

  • Feng Shi*, Misha Teplitskiy*, Eamon Duede, and James Evans. "Wisdom of Polarized Crowds." (Under review.) (Link to preprint). (*Equal authors.)

    • As political polarization in the United States continues to rise, the question of whether polarized individuals can fruitfully cooperate becomes pressing. Although diversity of individual perspectives typically leads to superior team performance on complex tasks, strong political perspectives have been associated with conflict, misinformation, and a reluctance to engage with people and perspectives beyond one's echo chamber. It remains unclear whether self-selected teams of ideologically diverse individuals will create higher or lower quality outcomes. In this paper, we explore the effect of team ideological composition on performance through analysis of millions of edits to Wikipedia's Political, Social Issues, and Science articles. We measure editors' online ideological preferences by how much they choose to contribute to conservative versus liberal articles. Two surveys of editors suggest that these online preferences are associated with offline (a) political party affiliation and (b) ideological self-identity. Our analysis then reveals that polarized teams, those consisting of a balanced set of ideologically diverse editors, create articles of higher quality than ideologically homogeneous teams. The effect appears most strongly in Wikipedia's Political articles, but is also observed in Social Issues and even Science articles. Analysis of article "talk pages" reveals that ideologically polarized teams engage in longer, more constructive, competitive, and substantively focused but linguistically diverse debates than ideological moderates. More intense use of Wikipedia policies by ideologically diverse teams suggests institutional design principles to help unleash the power of polarization.
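
The two measurements at the heart of this analysis, an editor's ideological alignment and a team's polarization, can be made concrete with a short sketch. This is a minimal illustration, not the paper's exact operationalization: the alignment formula and the variance-based polarization score are assumptions made here for concreteness.

```python
from statistics import pvariance

def editor_alignment(edits_conservative: int, edits_liberal: int) -> float:
    """Scale an editor from -1 (contributes only to liberal articles)
    to +1 (contributes only to conservative articles)."""
    total = edits_conservative + edits_liberal
    return (edits_conservative - edits_liberal) / total if total else 0.0

def team_polarization(alignments: list[float]) -> float:
    """Illustrative polarization score: population variance of member
    alignments. A balanced mix of strong conservatives and strong
    liberals scores high; homogeneous or all-moderate teams score low."""
    return pvariance(alignments) if len(alignments) > 1 else 0.0

# Example: a balanced, polarized team vs. a homogeneous one.
polarized = [editor_alignment(90, 10), editor_alignment(10, 90),
             editor_alignment(80, 20), editor_alignment(20, 80)]
homogeneous = [editor_alignment(85, 15)] * 4
print(team_polarization(polarized))    # 0.5 (high)
print(team_polarization(homogeneous))  # 0.0
```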

 

EVALUATION OF SCIENTIFIC IDEAS, SOCIOLOGY OF KNOWLEDGE

 

  • Misha Teplitskiy, Hardeep Ranu, Gary Gray, Eva Guinan, Karim Lakhani. "Gender, Status, and Willingness to be Wrong: A Field Experiment in Scientific Peer Review." (Manuscript in preparation.)

    • Many organizations rely on panels of experts to evaluate new ideas and investment opportunities. However, it is unclear how to best aggregate multiple individual judgments. Although normative models of decision-making suggest that information exchange among individuals improves judgment quality, it is unknown whether and under what conditions experts actually utilize information from one another in practice. Here, we report an experiment that measures information utilization by 277 faculty members at US medical schools who reviewed 47 project proposals in biomedicine. After completing their reviews independently, we exposed reviewers to artificial scores attributed to “other reviewers.” These scores were randomly generated to be above or below that of the reviewer and were described as coming from reviewers in the same or different discipline. We found that exposure to the artificial scores led reviewers to revise their original score in 47% of cases. Contrary to normative models, reviewers were insensitive to the disciplinary expertise of the stimulus. Much more important were reviewers’ identities: men updated their scores 12% less often than women, and high-status reviewers (those with particularly high h-indices) updated 24% less often than others. Lastly, low scores were particularly "sticky," while high scores were updated more than 60% of the time. The experiment strongly suggests that information aggregation among experts does not depend only on the quality of the information, and invites those relying on experts to consider not only the composition of panels but also the process by which panels arrive at judgments.
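
The outcome of interest is simply whether a reviewer's final score differs from their initial one, broken out by reviewer characteristics. A minimal sketch of that computation; the data rows and column names below are invented for illustration, not the study's dataset.

```python
import pandas as pd

# Hypothetical rows: one per (reviewer, proposal) pair after the
# artificial-score treatment. Values are made up for illustration.
reviews = pd.DataFrame({
    "reviewer_gender": ["M", "F", "M", "F", "F", "M"],
    "h_index":         [45, 12, 60, 8, 25, 15],
    "initial_score":   [3, 6, 8, 4, 7, 2],
    "final_score":     [3, 5, 8, 5, 6, 2],
})
reviews["updated"] = reviews["initial_score"] != reviews["final_score"]

# Overall update rate (the paper reports 47%).
print(reviews["updated"].mean())

# Update rates by gender, and by status (top-quartile h-index vs. rest).
print(reviews.groupby("reviewer_gender")["updated"].mean())
reviews["high_status"] = reviews["h_index"] >= reviews["h_index"].quantile(0.75)
print(reviews.groupby("high_status")["updated"].mean())
```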

 

  • Misha Teplitskiy, Daniel Acuna, Aida Raoult, Konrad Kording, James Evans. 2018. "The sociology of scientific validity: How professional networks shape judgment in peer review." Research Policy. (Link to article).

    • Scientific journals often rely on the judgments of external reviewers, but reviewers may be biased towards authors to whom they are personally connected. Although such biases have been observed in prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in assessments of already completed work, and if so, why. This study presents evidence that personal connections between authors and reviewers of neuroscience research are associated with biased decisions and explores the mechanisms driving the effect. Using the reviews of 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which evaluates manuscripts only on whether they are scientifically valid, we find that reviewers favored authors close in the co-authorship network by ~0.11 points (on a 1.0-4.0 scale) for each step of proximity. PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.” The findings suggest that removing bias from peer review cannot be accomplished simply by recusing the closest-connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.
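
The proximity measure here is the number of steps separating reviewer and author in the co-authorship network. A minimal sketch of computing such a distance with networkx and combining it with the reported ~0.11-points-per-step effect; the toy graph and node names are invented for illustration.

```python
import networkx as nx

# Toy co-authorship graph: nodes are researchers, edges are joint papers.
G = nx.Graph()
G.add_edges_from([("reviewer", "a"), ("a", "author_near"),
                  ("a", "b"), ("b", "c"), ("c", "author_far")])

def coauthor_distance(g: nx.Graph, reviewer: str, author: str) -> float:
    """Steps separating reviewer and author in the co-authorship network."""
    try:
        return nx.shortest_path_length(g, reviewer, author)
    except nx.NetworkXNoPath:
        return float("inf")

# Reported effect: ~0.11 points (1.0-4.0 scale) per step of proximity.
# Predicted score gap between a 2-step and a 4-step author:
near = coauthor_distance(G, "reviewer", "author_near")  # 2
far = coauthor_distance(G, "reviewer", "author_far")    # 4
print(0.11 * (far - near))  # ~0.22 points of favoritism
```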

 

  • Tod S. Van Gunten, John Levi Martin, Misha Teplitskiy. 2016. "Consensus, Polarization, and Alignment in the Economics Profession." Sociological Science. (Link to article).

    • Scholars interested in the political influence of the economics profession debate whether the discipline is unified by policy consensus or divided among competing schools or factions. We address this question by reanalyzing a unique recent survey of elite economists. We present a theoretical framework based on a formal sociological approach to the structure of belief systems and propose alignment, rather than consensus or polarization, as a model for the structure of belief in the economics profession. Moreover, we argue that social clustering in a heterogeneous network topology is a better model for disciplinary social structure than discrete factionalization. Results show that there is a robust latent ideological dimension related to economists’ departmental affiliations and political partisanship. Furthermore, we show that economists closer to one another in informal social networks also share more similar ideologies.
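
Recovering a single latent ideological dimension from item-level survey responses can be illustrated with a principal-components sketch. This is a simulated stand-in for the paper's formal belief-system approach, shown only to make the idea of a latent alignment dimension concrete.

```python
import numpy as np

# Hypothetical survey matrix: rows are economists, columns are policy
# items coded from strong disagreement to strong agreement.
rng = np.random.default_rng(0)
latent = rng.normal(size=50)               # one underlying ideology score
loadings = rng.uniform(0.5, 1.0, size=10)  # items load on that dimension
X = np.outer(latent, loadings) + rng.normal(scale=0.5, size=(50, 10))

# The first principal component recovers the latent alignment dimension.
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[0]
print(abs(np.corrcoef(scores, latent)[0, 1]))  # close to 1
```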

 

  • Daniel Acuna, Misha Teplitskiy, James Evans, Konrad Kording(Under review). "Should journals allow authors to suggest reviewers?"

 

  • Misha Teplitskiy, Julianna St. Onge, and James Evans. (Under review.) "How Firm is Sociological Knowledge? Reanalysis of GSS findings with alternative models and out-of-sample data, 1972-2016."

    • Published findings may be fragile because hypotheses were tailored to fit the data and knowledge about insignificant relationships ("negative knowledge") remains unreported, or because the world has changed and once-robust relationships no longer hold. We reanalyze findings from hundreds of articles that use the General Social Survey, 1972-2012, estimating (1) published models and alternative specifications on in-sample data, and (2) published models on future waves of the GSS. In both cases, the number of significant coefficients, standardized coefficient sizes, and R² are significantly reduced. Our findings suggest that social scientists are engaged in only moderate data mining, but that they could benefit from more; a bigger concern is the relevance of older published knowledge to the contemporary world.
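
The reanalysis design, refitting a published model on its original data and then scoring a later wave, can be sketched as follows. The data, coefficients, and statsmodels specification are invented stand-ins for the GSS models in the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# "Original waves": fit the published model in-sample.
X_old = sm.add_constant(rng.normal(size=(n, 3)))
y_old = X_old @ [1.0, 0.4, 0.2, 0.0] + rng.normal(size=n)
model = sm.OLS(y_old, X_old).fit()
print(model.rsquared)                    # in-sample fit
print((model.pvalues[1:] < 0.05).sum())  # count of significant coefficients

# "Future wave": same fitted model, new data with weaker true effects.
X_new = sm.add_constant(rng.normal(size=(n, 3)))
y_new = X_new @ [1.0, 0.2, 0.1, 0.0] + rng.normal(size=n)
resid = y_new - model.predict(X_new)
r2_out = 1 - resid.var() / y_new.var()
print(r2_out)                            # typically lower out of sample
```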

 

  • Misha Teplitskiy. 2015. "Frame Search and Re-search: How Quantitative Sociological Articles Change During Peer Review." The American Sociologist. (Link to article).

    • Peer review is a central institution in academic publishing, yet its processes and effects on research remain opaque. Empirical studies have (1) been rare because data on the peer review process are generally unavailable, and (2) conceptualized peer reviewers as gate-keepers who either accept or reject a manuscript, overlooking peer review's role in constructing articles. This study uses a unique data resource to study how sociological manuscripts change during peer review. Authors of published sociological research often present earlier versions of that research at annual meetings of the American Sociological Association (ASA). Many of these annual meetings papers are publicly available online and tend to be uploaded before undergoing formal peer review. A data sample is constructed by linking these papers to the respective versions published between 2006 and 2012 in two peer-reviewed journals, American Sociological Review and Social Forces. Quantitative and qualitative analyses examine changes across article versions, paying special attention to how elements of data analysis and theory in the ASA versions change. Results show that manuscripts tend to change more substantially in their theoretical framing than in the data analyses. The finding suggests that a chief effect of peer review in quantitative sociology is to prompt authors to adjust their theoretical framing, a mode of review I call "data-driven." The data-driven mode of review problematizes the vision of sociological research as addressing theoretically motivated questions.
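
Quantifying how much a section changed between the ASA version and the published version can be illustrated with a simple text-similarity sketch; the excerpts and the difflib-based change measure are illustrative assumptions, not the paper's method.

```python
from difflib import SequenceMatcher

def section_change(before: str, after: str) -> float:
    """Share of text that changed between versions (0 = identical)."""
    return 1 - SequenceMatcher(None, before, after).ratio()

# Toy excerpts standing in for the ASA and published versions.
theory_asa = "We draw on social capital theory to motivate the analysis."
theory_pub = "We draw on network closure arguments to frame the analysis."
methods_asa = "We estimate logistic regressions on the 2004 survey."
methods_pub = "We estimate logistic regressions on the 2004 survey."

print(section_change(theory_asa, theory_pub))    # framing changed
print(section_change(methods_asa, methods_pub))  # analysis stable: 0.0
```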

 

  • Misha Teplitskiy and Von Bakanic. 2016. "Do Peer Reviewers Predict Impact?: Evidence from the American Sociological Review, 1977-1982." Socius. (Link to article).

    • Peer review is the premier method of evaluation in academic publishing, but the validity of reviewers' and editors' judgments has long been questioned. Here we investigate how well peer reviews predict impact. Most previous studies have lacked an external measure of validity and, consequently, compared reviewers' judgments only to one another. These studies find that reviewers disagree frequently, and some have interpreted the disagreement as confirming the common suspicion that reviewers base their judgments on idiosyncratic preferences and allegiances. Reviewers may also disagree about the quality of a manuscript for other reasons, including because the manuscript's quality is on the cusp between acceptance and rejection. Previous studies could not distinguish between the several interpretations of reviewer disagreement and, consequently, the validity of peer review decisions has remained unclear. To rectify this problem, we use historical peer review data from the journal American Sociological Review and compare editorial judgments to an external validity measure: citations. Results indicate that, in the short-term, consensus-accept articles do not substantially outperform those over which reviewers disagreed. However, differences become manifest in the long-term citation trajectories, as consensus-accept articles outperform all others. This finding challenges the common view that peer review decisions are valid only in the short-term and that long-term scientific trajectories are unpredictable.
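
The comparison behind this finding is between the citation trajectories of consensus-accept articles and those the reviewers split over. A minimal sketch with invented numbers and illustrative column names:

```python
import pandas as pd

# Hypothetical article-level data: reviewer verdicts plus citation
# counts at two horizons. Values are made up for illustration.
articles = pd.DataFrame({
    "review_outcome": ["consensus_accept", "split", "split",
                       "consensus_accept", "split"],
    "cites_5yr":  [20, 18, 22, 25, 19],
    "cites_25yr": [210, 95, 120, 260, 88],
})

# Short-term: little separation. Long-term: consensus-accept pulls ahead.
print(articles.groupby("review_outcome")[["cites_5yr", "cites_25yr"]].mean())
```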

 

DIFFUSION OF SCIENTIFIC KNOWLEDGE

 

  • Misha Teplitskiy, Grace Lu, Eamon Duede. 2016. "Amplifying the Impact of Open Access: Wikipedia and the Diffusion of Science." Journal of the Association for Information Science and Technology. (Link to article).

    • With the rise of Wikipedia as a first-stop source for scientific knowledge, it is important to compare its representation of that knowledge to that of the academic literature. Here we identify the 250 most heavily used journals in each of 26 research fields (4,721 journals, 19.4M articles in total) indexed by the Scopus database, and test whether topic, academic status, and accessibility make articles from these journals more or less likely to be referenced on Wikipedia. We find that a journal's academic status (impact factor) and accessibility (open access policy) both strongly increase the probability of its being referenced on Wikipedia. Controlling for field and impact factor, the odds that an open access journal is referenced on the English Wikipedia are 47% higher compared to paywall journals. One of the implications of this study is that a major consequence of open access policies is to significantly amplify the diffusion of science, through an intermediary like Wikipedia, to a broad audience.
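
The headline estimate has the form of a logistic regression of whether a journal is referenced on Wikipedia on its open access status and impact factor, with the open-access coefficient exponentiated into an odds ratio. A minimal simulation-based sketch; the data, variable names, and statsmodels specification are illustrative, not the paper's exact model.

```python
import numpy as np
import statsmodels.api as sm

# Simulated journal-level data with a built-in odds ratio of ~1.47
# for open access (the paper's reported 47% higher odds).
rng = np.random.default_rng(2)
n = 500
open_access = rng.integers(0, 2, size=n)  # 0/1 indicator
log_if = rng.normal(size=n)               # stand-in for impact factor
X = sm.add_constant(np.column_stack([open_access, log_if]))
logits = -1.0 + np.log(1.47) * open_access + 0.8 * log_if
referenced = rng.binomial(1, 1 / (1 + np.exp(-logits)))

fit = sm.Logit(referenced, X).fit(disp=0)
# Exponentiating the open-access coefficient gives the odds ratio.
print(np.exp(fit.params[1]))  # approximately 1.47
```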