Publications

Journal Articles
Shen, Zejiang, Jian Zhao, Weining Li, Yaoliang Yu, and Melissa Dell. “OLALA: Object-Level Active Learning Based Layout Annotation.” EMNLP Computational Social Science Workshop (2023, forthcoming).
Layout detection is an essential step for accurately extracting structured contents from historical documents. The intricate and varied layouts present in these document images make it expensive to label the numerous layout regions that can be densely arranged on each page. Current active learning methods typically rank and label samples at the image level, where the annotation budget is not optimally spent due to the overexposure of common objects per image. Inspired by recent progress in semi-supervised learning and self-training, we propose Olala, an Object-Level Active Learning framework for efficient document layout Annotation. Olala aims to optimize the annotation process by selectively annotating only the most ambiguous regions within an image, while using automatically generated labels for the rest. Central to Olala is a perturbation-based scoring function that determines which objects require manual annotation. Extensive experiments show that Olala can significantly boost model performance and improve annotation efficiency, facilitating the extraction of masses of structured text for downstream NLP applications.
Paper
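For intuition, the perturbation-based scoring described in the abstract above can be sketched in a few lines. The following is an illustrative Python sketch, not the paper's implementation: `detect` stands in for any layout detector that returns bounding boxes, and Gaussian pixel noise is just one possible perturbation. The idea it illustrates is that a region whose predicted box is unstable under perturbation is treated as ambiguous and routed to a human annotator, while stable regions keep their automatically generated labels.

```python
# Illustrative sketch of perturbation-based ambiguity scoring (hypothetical
# helper, not the Olala implementation). Boxes are (x1, y1, x2, y2) tuples.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def perturbation_scores(detect, image, noise_std=2.0):
    """Score each detected region by box instability under pixel noise.

    `detect` is any function mapping an image array to a list of boxes.
    Higher scores mark less stable (more ambiguous) regions, which would
    be sent for manual annotation; the rest keep their predicted labels.
    """
    base = detect(image)
    noisy_image = image.astype(np.float64) + np.random.normal(0.0, noise_std, image.shape)
    noisy = detect(noisy_image)
    # For each clean-image box, find its best match on the noisy image;
    # a low best-match IoU means the prediction moved, i.e. it is ambiguous.
    return [1.0 - max((iou(b, nb) for nb in noisy), default=0.0) for b in base]
```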
Shen, Zejiang, Ruochen Zhang, Melissa Dell, Benjamin Lee, Jacob Carlson, and Weining Li. “LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis.” International Conference on Document Analysis and Recognition (2021): 131-146. Article PDF
Recent advances in document image analysis (DIA) have been primarily driven by the application of neural networks. Ideally, research outcomes could be easily deployed in production and extended for further investigation. However, various factors like loosely organized codebases and sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been ongoing efforts to improve reusability and simplify deep learning (DL) model development in disciplines like natural language processing and computer vision, none of them are optimized for challenges in the domain of DIA. This represents a major gap in the existing toolkit, as DIA is central to academic research across a wide range of disciplines in the social sciences and humanities. This paper introduces LayoutParser, an open-source library for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and intuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks. To promote extensibility, LayoutParser also incorporates a community platform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both lightweight and large-scale digitization pipelines in real-world use cases. The library is publicly available at https://layout-parser.github.io.
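As a concrete illustration of the interface described above, the snippet below follows the library's published quickstart. The PubLayNet model-zoo path, score threshold, and label map are taken from the LayoutParser documentation and may change across releases; the Detectron2 backend must be installed for this example to run.

```python
# Minimal usage sketch following LayoutParser's documented quickstart;
# requires `pip install layoutparser` plus the Detectron2 backend.
import cv2
import layoutparser as lp

image = cv2.imread("page.png")[..., ::-1]  # OpenCV loads BGR; convert to RGB

# Model zoo path, threshold, and label map as given in the library docs.
model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

layout = model.detect(image)  # returns a Layout of typed TextBlocks
text_blocks = lp.Layout([b for b in layout if b.type == "Text"])
```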
Shen, Zejiang, Kaixuan Zhang, and Melissa Dell. “A Large Dataset of Historical Japanese Documents with Complex Layouts.” IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (2020): 548-559. Dataset
Deep learning-based approaches for automatic document layout analysis and content extraction have the potential to unlock rich information trapped in historical documents on a large scale. One major hurdle is the lack of large datasets for training robust models. In particular, little training data exist for Asian languages. To this end, we present HJDataset, a Large Dataset of Historical Japanese Documents with Complex Layouts. It contains over 250,000 layout element annotations of seven types. In addition to bounding boxes and masks of the content regions, it also includes the hierarchical structures and reading orders for layout elements. The dataset is constructed using a combination of human and machine efforts. A semi-rule-based method is developed to extract the layout elements, and the results are checked by human inspectors. The resulting large-scale dataset is used to provide baseline performance analyses for text region detection using state-of-the-art deep learning models. We also demonstrate the usefulness of the dataset on real-world document digitization tasks. The dataset is available at this https URL.
Paper
Dell, Melissa, and Benjamin Olken. “The Development Effects of the Extractive Colonial Economy: The Dutch Cultivation System in Java.” Review of Economic Studies 87, no. 1 (2020): 164-203. Paper Appendix Replication files
Zhang, Kaixuan, Zejiang Shen, Jie Zhou, and Melissa Dell. “Information Extraction from Text Regions with Complex Tabular Structure.” Conference on Neural Information Processing Systems Document Intelligence Workshop (2019). Paper
Dell, Melissa, Benjamin Feigenberg, and Kensuke Teshima. “The Violent Consequences of Trade-Induced Worker Displacement in Mexico.” American Economic Review: Insights 1, no. 1 (2019): 43-58. Paper Appendix Replication Files
Dell, Melissa, and Pablo Querubin. “Nation Building Through Foreign Intervention: Evidence from Discontinuities in Military Strategies.” Quarterly Journal of Economics 133, no. 2 (2018): 701-764. Paper Appendix Replication files
Dell, Melissa, Nathan Lane, and Pablo Querubin. “The Historical State, Local Collective Action, and Economic Development in Vietnam.” Econometrica 86, no. 6 (2018): 2083-2121. Paper Published Appendix Online Appendix Replication files
Dell, Melissa. “Trafficking Networks and the Mexican Drug War.” American Economic Review 105, no. 6 (2015): 1738-1779. Paper Appendix Replication files
Dell, Melissa, Benjamin Jones, and Benjamin Olken. “What Do We Learn from the Weather? The New Climate-Economy Literature.” Journal of Economic Literature (2014). Paper Appendix
Dell, Melissa, Benjamin Jones, and Benjamin Olken. “Temperature Shocks and Economic Growth: Evidence from the Last Half Century.” American Economic Journal: Macroeconomics 4, no. 3 (2012): 66-95. PDF Online appendix Data/program files
Dell, Melissa, and Daron Acemoglu. “Productivity Differences Between and Within Countries.” American Economic Journal: Macroeconomics 2, no. 1 (2010): 169-188. PDF Online appendix Data/program files
Dell, Melissa. “The Persistent Effects of Peru's Mining Mita.” Econometrica 78, no. 6 (2010): 1863-1903. PDF Online appendix Spanish translation Data/program files 1 Data/program files 2 Data/program files 3
Dell, Melissa, Benjamin Jones, and Benjamin Olken. “Temperature and Income: Reconciling New Cross-Sectional and Panel Estimates.” American Economic Review Papers and Proceedings 99, no. 2 (2009): 198-204. Paper Online appendix
Working Papers
Silcock, Emily, Luca D’Amico-Wong, Jinglin Yang, and Melissa Dell. “Noise-Robust De-Duplication at Scale,” Working Paper.
Identifying near duplicates within large, noisy text corpora has a myriad of applications that range from de-duplicating training datasets, reducing privacy risk, and evaluating test set leakage, to identifying reproduced news articles and literature within large corpora. Across these diverse applications, the overwhelming majority of work relies on N-grams. Limited efforts have been made to evaluate how well N-gram methods perform, in part because it is unclear how one could create an unbiased evaluation dataset for a massive corpus. This study uses the unique timeliness of historical news wires to create a 27,210-document dataset, with 122,876 positive duplicate pairs, for studying noise-robust de-duplication. The time-sensitivity of news makes comprehensive hand labeling feasible, despite the massive overall size of the corpus, as duplicates occur within a narrow date range. The study then develops and evaluates a range of de-duplication methods: hashing and N-gram overlap (which predominate in the literature), a contrastively trained bi-encoder, and a “re-rank” style approach combining a bi- and cross-encoder. The neural approaches significantly outperform hashing and N-gram overlap. We show that the bi-encoder scales well, de-duplicating a 10-million-article corpus on a single GPU card in a matter of hours. We also apply our pre-trained model to the RealNews and patent portions of C4 (Colossal Clean Crawled Corpus), illustrating that a neural approach can identify many near duplicates missed by hashing, in the presence of various types of noise. The public release of our NEWS-COPY de-duplication dataset, de-duplicated RealNews and patent corpora, and the pre-trained models will facilitate further research and applications.
Paper
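For intuition, a minimal bi-encoder de-duplication pass might look like the sketch below. It uses an off-the-shelf sentence-transformers checkpoint (`all-MiniLM-L6-v2`) and an illustrative similarity threshold; the paper's released NEWS-COPY model and tuning are not reproduced here, and an exhaustive pairwise scan stands in for the approximate nearest-neighbor search needed at the 10-million-article scale.

```python
# Illustrative bi-encoder near-duplicate detection (not the paper's exact
# pipeline): embed articles, then flag high-cosine-similarity pairs.
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

def near_duplicate_pairs(texts, threshold=0.92):
    # Placeholder checkpoint; the paper contrastively trains its own bi-encoder.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(texts, normalize_embeddings=True)  # unit-norm vectors
    sims = emb @ emb.T  # cosine similarity via dot product
    return [
        (i, j, float(sims[i, j]))
        for i in range(len(texts))
        for j in range(i + 1, len(texts))
        if sims[i, j] >= threshold  # illustrative cutoff, not the paper's
    ]
```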
Dell, Melissa. “Path Dependence in Development: Evidence from the Mexican Revolution,” Working Paper. PDF Data appendix