Just as physics gained public visibility and ideological contention as it matured over the twentieth century, so genomic science will gain public visibility and competing normative valences as it becomes increasingly important during the twenty-first century. We begin by describing Americans’ level of technology optimism or pessimism across four arenas in genomic science and one arena (climate change) outside genomic science. We then ask “so what?” – do people who perceive more harm than good in genomic science hold different policy preferences from those who perceive more good than harm? Do optimists and pessimists differ in their perceptions of elite actors, or their willingness to be directly involved with the new science? Finally, we consider variations within the public. Is knowledge about genetics associated with more optimism about genomic science? Are people with direct interests in one arena of genomics more optimistic (or pessimistic) about its future than they are about other arenas? Do religiosity or characteristics such as race or gender play a role in levels of optimism about genomics in general or particular genomics arenas?
We conclude, first, that public attitudes toward genomic science are coherent and intelligible, perhaps surprisingly so given how new and complex the substantive issues are, and, second, that citizens differ from most social scientists, legal scholars, and policy advocates in their overall embrace of genomics' possibilities for benefiting society.
The last five years have seen an explosion in the amount of data available to social scientists. Although a blessing, these extremely large datasets can cause problems for political scientists working with standard statistical software programs, which are poorly suited to analyzing big data. In this essay, we describe a few approaches to handling extremely large datasets within the R programming language, both at the command line prior to R and after we fire up R. We show that handling large datasets comes down to either (1) choosing tools that can shrink the problem or (2) fine-tuning R to handle massive data files.
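The "shrink the problem" approach can be sketched at the command line before R is ever involved. This is a minimal illustration, not the essay's own code: the file name, columns, and filter below are hypothetical stand-ins for a real multi-gigabyte dataset.

```shell
# Toy stand-in for a large CSV (in practice this file already exists
# and is too big to load into R whole).
printf 'id,state,amount\n1,CA,100\n2,TX,250\n3,CA,75\n' > loans.csv

# Shrink the problem before R sees the file: keep the header plus
# only the rows of interest (state == CA), then drop unneeded columns.
awk -F, 'NR==1 || $2=="CA"' loans.csv | cut -d, -f1,3 > loans_ca.csv

cat loans_ca.csv
# id,amount
# 1,100
# 3,75
```

The shrunken file can then be read into R with `read.csv()` or a faster reader such as `data.table::fread()`, at a fraction of the original memory cost.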
The recent subprime mortgage crisis has brought to the forefront the possibility of discriminatory lending on the basis of race or gender. Using the more than 10 million observations collected by the federal government in 2006 through the Home Mortgage Disclosure Act, this paper assesses these claims causally. In so doing, the paper examines two possible theories of discrimination: (1) that any discriminatory lending patterns reflect the fact that minority borrowers went to different lenders, perhaps as a result of predatory lending, and (2) that individual lenders discriminated against identically situated borrowers. The results provide limited evidence for the idea that borrowers of different races went to different lenders, but only in certain regions of the country and only for certain minority groups. In addition, many of these results are sensitive to missing confounders, such as financial data like credit scores and down payments, which the federal government does not collect. Ultimately, the results' sensitivity suggests that more data gathering is in order before definitive assertions can be made by legal and policy actors.
rodrikdani: Remember when economists didn't write books? (I do.) What the hell happened in recent years? What's the most parsimonious explanation for this outpouring of amazingly readable books by so many great economists?
maya_sen: @JoshuaSGoodman Best line: "some reviewers can also have really high standards in a way that creates lots of Type II errors—never accepting a paper."

maya_sen: and by "best" I mean "truest and most painful"