In examining the diffusion of social and political phenomena like regime transition, conflict,
and policy change, scholars routinely make choices about how proximity is defined and which
neighbors should be considered more important than others. Since each specification offers an
alternative view of the networks through which diffusion can take place, one's decision can exert
a significant influence on the magnitude and scope of estimated diffusion effects. This problem is
widely recognized, but is rarely the subject of direct analysis. In international relations research,
connectivity choices are usually ad hoc, driven more by data availability than by theoretically informed
decision criteria. We take a closer look at the assumptions behind these choices, and
propose a more systematic method to assess the structural similarity of two or more alternative
networks, and select one that most plausibly relates theory to empirics. We apply this method
to the spread of democratic regime change, and offer an illustrative example of how neighbor
choices might impact predictions and inferences in the case of the 2011 Arab Spring.
Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have prevented political scientists from using texts in their research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already fulfilled part of their promise. But there are pitfalls to using automated methods. Automated text methods are useful, but incorrect, models of language: they are no substitute for careful thought and close reading. Rather, automated text methods augment and amplify human reading abilities. Using the methods requires extensive validation in any one application. With these guiding principles for using automated methods, we clarify misconceptions and errors in the literature and identify open questions in the application of automated text analysis in political science. For scholars to avoid the pitfalls of automated methods, methodologists need to develop new methods specifically for how social scientists use quantitative text methods.
In most areas of life individuals communicate either by writing or by speaking. Yet, in many experiments/surveys subjects communicate by choosing options along a scale or in pre-established categories. These constructs, while helpful for many reasons, might not represent an individual's actual views or state of mind. If the same individual were forced to explain their position, then they might report something else. But relatively few studies collect written responses, and when they do they are rarely analyzed because it is assumed that human coding is required. We introduce an alternative approach that incorporates information about the text, such as the author's gender, country of origin, treatment status, or when something was written. Our innovation is to take advantage of the Structural Topic Model introduced by Roberts et al. (2012) to make analyzing open-ended responses easier, more revealing, and capable of being used to estimate treatment effects and compliance with encouragement designs. We illustrate these innovations with several experiments.
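The Structural Topic Model itself is implemented in R's stm package; the sketch below illustrates only the downstream estimation idea under simplified assumptions. Once each open-ended response has been summarized as topic proportions (simulated here, not produced by an actual model fit), a treatment effect on attention to a topic can be estimated as a difference in mean proportions. All names and the simulated effect size are hypothetical.

```python
# Minimal sketch, assuming each response has already been reduced to a
# proportion of words devoted to a topic of interest. The data are
# simulated; a real analysis would take proportions from a fitted
# Structural Topic Model.
import random

random.seed(0)

def simulate_proportion(treated):
    # Hypothetical data-generating process: treated respondents devote
    # more of their answer to the topic (true effect = 0.2).
    base = 0.5 if treated else 0.3
    return min(max(base + random.gauss(0, 0.05), 0.0), 1.0)

treated = [simulate_proportion(True) for _ in range(200)]
control = [simulate_proportion(False) for _ in range(200)]

# Treatment effect as a difference in mean topic proportions.
effect = sum(treated) / len(treated) - sum(control) / len(control)
print(round(effect, 3))  # close to the simulated true effect of 0.2
```

In practice the stm package also propagates the uncertainty of the topic proportions themselves into the effect estimate, which a simple difference in means ignores.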
Russia’s intervention in the Georgian–South Ossetian conflict has
highlighted the need to rigorously examine trends in the public debate
over the use of force in Russia. Approaching this debate through the prism of
civil–military relations, we take advantage of recent methodological
advances in automated content analysis and generate a new dataset of 8,000
public statements made by Russia’s political and military leaders during the
Putin period. The data show little evidence that military elites exert
a restraining influence on Russian foreign and defence policy. Although more
hesitant than their political counterparts to embrace an interventionist foreign
policy agenda, Russian military elites are considerably more activist in
considering the use of force as an instrument of foreign policy.
This study addresses the factors that lead individuals to flee their homes in search
of refuge. Many argue that individuals abandon their homes in favor of an uncertain
life elsewhere because of economic hardship, while others argue that threats to their
lives, physical person, and liberty cause them to flee. This study engages the debate
by analyzing flight patterns over time from Haiti to the United States as a function of
economic and security factors. Which factors have the largest influence on Haitian–U.S.
migratory patterns? Our results show that both economics and security play a role.
However, our analyses are able to distinguish between the effects of different individual
economic and security indicators on Haitian–U.S. migration.
This study predicts forced migration events by predicting the civil violence,
poor economic conditions, and foreign interventions known to cause
individuals to flee their homes in search of refuge. If we can predict forced
migration, policy-makers can better plan for humanitarian crises. While the
study is limited to predicting Haitian flight to the United States, its strength is
its ability to predict weekly flows as opposed to annual flows, providing a
greater level of predictive detail than its ‘country-year’ counterparts. We
focus on Haiti given that it exhibits most, if not all, of the independent
variables included in theories and models of forced migration. Within our
temporal domain (1994–2004), Haiti experienced economic instability, low-intensity
civil conflict, state repression, rebel dissent, and foreign intervention
and influence. Given the model’s performance, the study calls for the
collection of disaggregated data in additional countries to provide more
precise and useful early-warning models of forced migrant events.
This paper examines the effects of source bias on statistical
inferences drawn from event data analyses. Most event data projects
use a single source to code events. For example, most of the early
Kansas Event Data System (KEDS) datasets code only Reuters and
Agence France Presse (AFP) reports. One of the goals of Project
Civil Strife (PCS), a new internal conflict–cooperation event data
project, is to code event data from several news sources to garner the
most extensive coverage of events and control for the bias often found
in a single source. Herein, we examine the effects that source bias
has on the inferences we draw from statistical time-series models.
In this study, we examine domestic political conflict in Indonesia
and Cambodia from 1980–2004 using datasets generated through automated
content analysis and collected from multiple sources (i.e., Associated Press,
British Broadcasting Corporation, Japan Economic Newswire, United
Press International, and Xinhua). The analyses show that we draw
different inferences across sources, especially when we disaggregate
domestic political groups. We then combine our sources together
and eliminate duplicate events to create a multi-source dataset and
compare the results to the single-source models. We conclude that
there are important differences in the inferences drawn depending
upon the source used. Therefore, researchers should (1) check their
results across multiple sources and/or (2) analyze multi-source data
to test hypotheses when possible.
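The multi-source merging step described above can be sketched minimally. The record fields and the exact-match deduplication key below are hypothetical; event-data projects in practice rely on richer coding schemes and fuzzier matching rules for deciding when two reports describe the same event.

```python
# Minimal sketch of combining event records from several news sources and
# eliminating duplicate events. Two records are treated as the same event
# if they share date, source actor, target actor, and event type (a
# simplifying assumption, not the PCS project's actual rule).

def merge_sources(*source_datasets):
    """Combine event lists from several sources, keeping one copy of each
    distinct event (first occurrence wins)."""
    seen = set()
    merged = []
    for dataset in source_datasets:
        for event in dataset:
            key = (event["date"], event["actor"], event["target"], event["type"])
            if key not in seen:
                seen.add(key)
                merged.append(event)
    return merged

# Hypothetical records: two wires report the same demonstration.
ap = [{"date": "1998-05-21", "actor": "protesters", "target": "government",
       "type": "demonstration", "source": "AP"}]
xinhua = [{"date": "1998-05-21", "actor": "protesters", "target": "government",
           "type": "demonstration", "source": "Xinhua"},
          {"date": "1998-05-22", "actor": "government", "target": "protesters",
           "type": "repression", "source": "Xinhua"}]

merged = merge_sources(ap, xinhua)
print(len(merged))  # 2: the duplicate demonstration report is dropped
```

The single-source versus multi-source comparison in the study amounts to estimating the same time-series models on each source's events separately and then on the merged, deduplicated set.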
This paper was written for the re-launch of The Monitor, an undergraduate research journal at the College of William and Mary. Dan Maliniak and I were invited by the editorial board to write the introductory article for the first issue. We chose to write about the principles of strong social science research, geared towards interested undergraduates.