Peer review, schools of thought in science
Tod S. Van Gunten, John Levi Martin, Misha Teplitskiy. 2016. Consensus, Polarization, and Alignment in the Economics Profession. Sociological Science. (Link to article)
Scholars interested in the political influence of the economics profession debate whether the discipline is unified by policy consensus or divided among competing schools or factions. We address this question by reanalyzing a unique recent survey of elite economists. We present a theoretical framework based on a formal sociological approach to the structure of belief systems and propose alignment, rather than consensus or polarization, as a model for the structure of belief in the economics profession. Moreover, we argue that social clustering in a heterogeneous network topology is a better model for disciplinary social structure than discrete factionalization. Results show that there is a robust latent ideological dimension related to economists’ departmental affiliations and political partisanship. Furthermore, we show that economists closer to one another in informal social networks also share more similar ideologies.
Misha Teplitskiy, Daniel Acuna, Aida Raoult, Konrad Kording, James Evans. (Under review). Network Effects in Judgments of Scientific Validity.
Scientific journals often rely on the judgments of external reviewers, but reviewers may be biased towards authors to whom they are personally connected. Although such biases have been observed in prospective judgments of (uncertain) future performance, it is unknown whether such biases occur in assessments of already completed work, and if so, why. This study presents evidence that personal connections between authors and reviewers of neuroscience research are associated with biased decisions and explores the mechanisms driving the effect. Using the reviews of 7,981 neuroscience manuscripts submitted to the journal PLOS ONE, which evaluates manuscripts only on whether they are scientifically valid, we find that reviewers favored authors close in the co-authorship network by ~0.11 points (1.0 – 4.0 scale) for each step of proximity. PLOS ONE’s validity-focused review and the substantial amount of favoritism shown by distant vs. very distant reviewers, both of whom should have little to gain from nepotism, point to the central role of substantive disagreements between scientists in different “schools of thought.” The findings suggest that removing bias from peer review cannot be accomplished simply by recusing the closest-connected reviewers, and highlight the value of recruiting reviewers embedded in diverse professional networks.
Daniel Acuna, Misha Teplitskiy, James Evans, Konrad Kording. (Under review). Should journals allow authors to suggest reviewers?
James Evans and Misha Teplitskiy (equal authors). (Under review.) How Firm is Sociological Knowledge? Reanalysis of GSS findings with alternative models and out-of-sample data, 1972-2012.
Published findings may be fragile because hypotheses were tailored to fit the data and knowledge about insignificant relationships - "negative knowledge" - remains unreported, or because the world has changed and once-robust relationships no longer hold. We reanalyze findings from hundreds of articles that use the General Social Survey, 1972-2012, estimating (1) published models and alternative specifications on in-sample data, and (2) published models on future waves of the GSS. In both cases, the number of significant coefficients, standardized coefficient sizes, and R2 are significantly reduced. Our findings suggest that social scientists are engaged in only moderate data mining, but that they could benefit from more; a bigger concern is the relevance of older published knowledge to the contemporary world.
Frame Search and Re-search: How Quantitative Sociological Articles Change During Peer Review.
Peer review is a central institution in academic publishing, yet its processes and effects on research remain opaque. Empirical studies have (1) been rare because data on the peer review process are generally unavailable, and (2) conceptualized peer reviewers as gate-keepers who either accept or reject a manuscript, overlooking peer review's role in constructing articles. This study uses a unique data resource to study how sociological manuscripts change during peer review. Authors of published sociological research often present earlier versions of that research at annual meetings of the American Sociological Association (ASA). Many of these annual meetings papers are publicly available online and tend to be uploaded before undergoing formal peer review. A data sample is constructed by linking these papers to the respective versions published between 2006 and 2012 in two peer-reviewed journals, American Sociological Review and Social Forces. Quantitative and qualitative analyses examine changes across article versions, paying special attention to how elements of data analysis and theory in the ASA versions change. Results show that manuscripts tend to change more substantially in their theoretical framing than in the data analyses. The finding suggests that a chief effect of peer review in quantitative sociology is to prompt authors to adjust their theoretical framing, a mode of review I call "data-driven." The data-driven mode of review problematizes the vision of sociological research as addressing theoretically motivated questions.
Misha Teplitskiy and Von Bakanic. Do Peer Reviewers Predict Impact?: Evidence from the American Sociological Review, 1977-1982.
Peer review is the premier method of evaluation in academic publishing, but the validity of reviewers' and editors' judgments has long been questioned. Here we investigate how well peer reviews predict a manuscript's subsequent impact. Most previous studies have lacked an external measure of validity and, consequently, compared reviewers' judgments only to each other. These studies find that reviewers disagree frequently, and some have interpreted the disagreement as confirming the common suspicion that reviewers base their judgments on idiosyncratic preferences and allegiances. Reviewers may also disagree about the quality of a manuscript for other reasons, including because the manuscript's quality is on the cusp between acceptance and rejection. Previous studies could not distinguish between these interpretations of reviewer disagreement and, consequently, the validity of peer review decisions has remained unclear. To rectify this problem, we use historical peer review data from the journal American Sociological Review and compare editorial judgments to an external validity measure - citations. Results indicate that, in the short term, consensus-accept articles do not substantially outperform those over which reviewers disagreed. However, differences become manifest in the long-term citation trajectories, as consensus-accept articles outperform all others. This finding challenges the common view that peer review decisions are valid only in the short term and that long-term scientific trajectories are unpredictable.
Diffusion of scientific knowledge
Misha Teplitskiy, Grace Lu, Eamon Duede. Amplifying the Impact of Open Access: Wikipedia and the Diffusion of Science
With the rise of Wikipedia as a first-stop source for scientific knowledge, it is important to compare its representation of that knowledge to that of the academic literature. Here we identify the 250 most heavily used journals in each of 26 research fields (4,721 journals, 19.4M articles in total) indexed by the Scopus database, and test whether topic, academic status, and accessibility make articles from these journals more or less likely to be referenced on Wikipedia. We find that a journal's academic status (impact factor) and accessibility (open access policy) both strongly increase the probability of its being referenced on Wikipedia. Controlling for field and impact factor, the odds that an open access journal is referenced on the English Wikipedia are 47% higher compared to paywall journals. One of the implications of this study is that a major consequence of open access policies is to significantly amplify the diffusion of science, through an intermediary like Wikipedia, to a broad audience.
Feng Shi, Misha Teplitskiy (equal authors), James Evans, Eamon Duede. (Under review) Wisdom of Politically Polarized Crowds. (Link to article)
As political polarization in the United States continues to rise, the question of whether polarized individuals can fruitfully cooperate becomes pressing. Although diversity of individual perspectives typically leads to superior team performance on complex tasks, strong political perspectives have been associated with conflict, misinformation and a reluctance to engage with people and perspectives beyond one's echo chamber. It is unclear whether self-selected teams of politically diverse individuals will create higher or lower quality outcomes. In this paper, we explore the effect of team political composition on performance through analysis of millions of edits to Wikipedia's Political, Social Issues, and Science articles. We measure editors' political alignments by their contributions to conservative versus liberal articles. A survey of editors validates that those who primarily edit liberal articles identify more strongly with the Democratic party and those who edit conservative ones with the Republican party. Our analysis then reveals that polarized teams---those consisting of a balanced set of politically diverse editors---create articles of higher quality than politically homogeneous teams. The effect appears most strongly in Wikipedia's Political articles, but is also observed in Social Issues and even Science articles. Analysis of article "talk pages" reveals that politically polarized teams engage in longer, more constructive, competitive, and substantively focused but linguistically diverse debates than political moderates. More intense use of Wikipedia policies by politically diverse teams suggests institutional design principles to help unleash the power of politically polarized teams.
Rhetoric of science
(Working paper.) Objectivity in the Philosophical Transactions of the Royal Society, 1665 to 2000.
Sociologists have long debated the possibility of objectivity in social inquiry, often taking for granted that "objectivity" is a stable and concrete concept. Recently, historians of science have argued that conceptions of objectivity, what scientists take to be the proper relationship between a subject and its object, have changed dramatically over the last 350 years. To seventeenth-century British natural philosophers an objective observation was one that exercised careful judgment to perceive nature's essential features; speculation on the causes of phenomena was easily corrupted by the "passions" and was to be avoided. Mid-nineteenth-century British scientists believed that human judgment corrupted observations; observations should be performed instead by machines. As conceptions of objectivity evolved, so too did conceptions of the scientist's task. Here, I trace the evolution of both by measuring the extent to which authors of scientific articles avoided expressions of emotions and the extent to which they explicitly discussed causes of phenomena. Using 54 volumes of the Philosophical Transactions of the Royal Society, a corpus that spans 1665 to 2000, and natural language processing tools, I show that expressions of emotion decline gradually from the late seventeenth century to about 1875, when they disappear altogether. The prevalence of discussions of causes, which has been consistently associated with hedging, increases gradually from 1665 to 2000. Emotional distance corresponds well with what Daston and Galison (2007) have called scientific objectivity, but appears to be adopted much more gradually, suggesting large-scale cultural change rather than the event-centered explanations usually offered by historians.