Decision Making

The Model Thinker: What You Need to Know to Make Data Work for You
Scott E. Page. 11/27/2018. The Model Thinker: What You Need to Know to Make Data Work for You, Pp. 448.
The Many-Model Thinker 

This is a book about models. It describes dozens of models in straightforward language and explains how to apply them. Models are formal structures represented in mathematics and diagrams that help us to understand the world. Mastery of models improves your ability to reason, explain, design, communicate, act, predict, and explore.

This book promotes a many-model thinking approach: the application of ensembles of models to make sense of complex phenomena. The core idea is that many-model thinking produces wisdom through a diverse ensemble of logical frames. The various models accentuate different causal forces. Their insights and implications overlap and interweave. By engaging many models as frames, we develop nuanced, deep understandings. The book includes formal arguments to make the case for multiple models along with myriad real-world examples.

The book has a pragmatic focus. Many-model thinking has tremendous practical value. Practice it, and you will better understand complex phenomena. You will reason better. You will exhibit fewer gaps in your reasoning and make more robust decisions in your career, community activities, and personal life. You may even become wise.

Twenty-five years ago, a book of models would have been intended for professors and graduate students studying business, policy, and the social sciences along with financial analysts, actuaries, and members of the intelligence community. These were the people who applied models and, not coincidentally, they were also the people most engaged with large data sets. Today, a book of models has a much larger audience: the vast universe of knowledge workers, who, owing to the rise of big data, now find working with models a part of their daily lives.

Organizing and interpreting data with models has become a core competency for business strategists, urban planners, economists, medical professionals, engineers, actuaries, and environmental scientists, among others. Anyone who analyzes data, formulates business strategies, allocates resources, designs products and protocols, or makes hiring decisions encounters models. It follows that mastering the material in this book—particularly the models covering innovation, forecasting, data binning, learning, and market entry timing—will be of practical value to many.

Thinking with models will do more than improve your performance at work. It will make you a better citizen and a more thoughtful contributor to civic life. It will make you more adept at evaluating economic and political events. You will be able to identify flaws in your logic and in that of others. You will learn to identify when you are allowing ideology to supplant reason and have richer, more layered insights into the implications of policy initiatives, whether they be proposed greenbelts or mandatory drug tests.

These benefits will accrue from an engagement with a variety of models—not hundreds, but a few dozen. The models in this book offer a good starting collection. They come from multiple disciplines and include the Prisoners’ Dilemma, the Race to the Bottom, and the SIR model of disease transmission. All of these models share a common form: they assume a set of entities—often people or organizations—and describe how they interact.

The models we cover fall into three classes: simplifications of the world, mathematical analogies, and exploratory, artificial constructs. In whatever form, a model must be tractable. It must be simple enough that within it we can apply logic. For example, we cover a model of communicable diseases that divides people into infected, susceptible, and recovered groups and assumes a rate of contagion. Using the model, we can derive a contagion threshold, a tipping point above which the disease spreads. We can also determine the proportion of people we must vaccinate to stop the disease from spreading.
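
To make that derivation concrete, here is a minimal sketch of the threshold logic in Python. It is not from the book, and the rates below are illustrative assumptions rather than estimates for any real disease.

```python
# Minimal SIR-style threshold sketch. The rates below are illustrative only.
beta = 0.3    # transmission rate: new infections caused per infected person per day
gamma = 0.1   # recovery rate: 1 / average number of days a person stays infectious

r0 = beta / gamma          # basic reproduction number
epidemic = r0 > 1          # contagion threshold: the disease takes off only if R0 exceeds 1

# Vaccinating a fraction v of the population scales transmission to R0 * (1 - v),
# so spread stops once v >= 1 - 1/R0 (the herd-immunity threshold).
vaccination_fraction = max(0.0, 1 - 1 / r0)

print(f"R0 = {r0:.1f}; epidemic takes off: {epidemic}")
print(f"Minimum fraction to vaccinate: {vaccination_fraction:.0%}")
```

With these made-up rates, R0 = 3, so the disease spreads unless roughly two-thirds of the population is vaccinated.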

As powerful as single models can be, a collection of models accomplishes even more. With many models, we avoid the narrowness inherent in each individual model. A many-models approach illuminates each component model’s blind spots. Policy choices made based on single models may ignore important features of the world such as income disparity, identity diversity, and interdependencies with other systems. With many models, we build logical understandings of multiple processes. We see how causal processes overlap and interact. We create the possibility of making sense of the complexity that characterizes our economic, political, and social worlds. And we do so without abandoning rigor—model thinking ensures logical coherence. That logic can then be grounded in evidence by taking models to data to test, refine, and improve them. In sum, when our thinking is informed by diverse, logically consistent, empirically validated frames, we are more likely to make wise choices.

 


Small Cues Change Savings Choices
James J. Choi, Emily Haisley, Jennifer Kurkoski, and Cade Massey. 2017. “Small Cues Change Savings Choices.” Behavioral Evidence Hub.

PROJECT SUMMARY

Researchers tested the effects of including cues, anchors, and savings goals in a company email encouraging employee contributions to their 401(k).

IMPACT

Researchers found that providing high contribution rate or savings goal examples, or highlighting high savings thresholds created by the 401(k) plan rules, increased 401(k) contribution rates by 1-2% of income per pay period.


Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them
Berkeley Dietvorst, Joseph P. Simmons, and Cade Massey. 6/13/2015. “Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them.” SSRN.
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms reflected a desire for some control over the forecasting outcome rather than a desire for greater control, as this preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control - even a slight amount - over an imperfect algorithm’s forecast.
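
As a concrete illustration of what a “restricted modification” might look like in practice, here is a minimal sketch in Python. The band width and the numbers in the example are hypothetical illustrations, not the values used in the studies.

```python
def constrained_forecast(algorithm_forecast: float,
                         human_forecast: float,
                         max_adjustment: float) -> float:
    """Let a person adjust the algorithm's forecast, but only within a fixed band.

    This mirrors the spirit of the modifiable-algorithm conditions: the final
    forecast stays anchored to the algorithm while the forecaster keeps some control.
    """
    lower = algorithm_forecast - max_adjustment
    upper = algorithm_forecast + max_adjustment
    return min(max(human_forecast, lower), upper)

# Hypothetical example: the algorithm predicts 62, the person would have said 80;
# with a +/-10 band the submitted forecast becomes 72.
print(constrained_forecast(algorithm_forecast=62, human_forecast=80, max_adjustment=10))
```
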
Deliberate Practice Spells Success: Why Grittier Competitors Triumph at the National Spelling Bee
Angela Lee Duckworth, Teri A. Kirby, Eli Tsukayama, Heather Berstein, and K. Anders Ericsson. 10/4/2010. “Deliberate Practice Spells Success: Why Grittier Competitors Triumph at the National Spelling Bee.” Social Psychological and Personality Science, 2, 2, Pp. 174–181.

The expert performance framework distinguishes between deliberate practice and less effective practice activities. The current longitudinal study is the first to use this framework to understand how children improve in an academic skill. Specifically, the authors examined the effectiveness and subjective experience of three preparation activities widely recommended to improve spelling skill. Deliberate practice, operationally defined as studying and memorizing words while alone, better predicted performance in the National Spelling Bee than being quizzed by others or reading for pleasure. Rated as the most effortful and least enjoyable type of preparation activity, deliberate practice was increasingly favored over being quizzed as spellers accumulated competition experience. Deliberate practice mediated the prediction of final performance by the personality trait of grit, suggesting that perseverance and passion for long-term goals enable spellers to persist with practice activities that are less intrinsically rewarding—but more effective—than other types of preparation.


Doing Better but Feeling Worse: Looking for the “Best” Job Undermines Satisfaction
Sheena S. Iyengar, Rachael E. Wells, and Barry Schwartz. 2/1/2006. “Doing Better but Feeling Worse: Looking for the “Best” Job Undermines Satisfaction.” Psychological Science, 17, 2, Pp. 143–150.

Expanding upon Simon's (1955) seminal theory, this investigation compared the choice-making strategies of maximizers and satisficers, finding that maximizing tendencies, although positively correlated with objectively better decision outcomes, are also associated with more negative subjective evaluations of these decision outcomes. Specifically, in the fall of their final year in school, students were administered a scale that measured maximizing tendencies and were then followed over the course of the year as they searched for jobs. Students with high maximizing tendencies secured jobs with 20% higher starting salaries than did students with low maximizing tendencies. However, maximizers were less satisfied than satisficers with the jobs they obtained, and experienced more negative affect throughout the job-search process. These effects were mediated by maximizers' greater reliance on external sources of information and their fixation on realized and unrealized options during the search and selection process.


Necessary Evils and Interpersonal Sensitivity in Organizations
Joshua Margolis and Andrew Molinsky. 4/1/2005. “Necessary Evils and Interpersonal Sensitivity in Organizations.” The Academy of Management Review, 30, 2, Pp. 245-268.

In order to produce a beneficial result, professionals must sometimes cause harm to another human being. To capture this phenomenon, we introduce the construct of "necessary evils" and explore the inherent challenges such tasks pose for those who must perform them. Whereas previous research has established the importance of treating victims of necessary evils with interpersonal sensitivity, we focus on the challenges performers face when attempting to achieve this prescribed standard in practice.


How People Analytics Can Help You Change Process, Culture, and Strategy
Chantrelle Nielsen and Natalie McCullough. 5/17/2018. “How People Analytics Can Help You Change Process, Culture, and Strategy.” Harvard Business Review.

It seems like every business is struggling with the concept of transformation. Large incumbents are trying to keep pace with digital upstarts, and even digital-native companies born as disruptors know that they need to transform. Take Uber: at only eight years old, it’s already upended the business model of taxis. Now it’s trying to move from a software platform to a robotics lab to build self-driving cars.

And while the range of initiatives that fall under the umbrella of “transformation” is so broad that the term can seem meaningless, this breadth is actually one of the defining characteristics that differentiate transformation from ordinary change. A transformation is a whole portfolio of change initiatives that together form an integrated program.

And so a transformation is a system of systems, all made up of the most complex system of all — people. For this reason, organizational transformation is uniquely suited to the analysis, prediction, and experimental research approach of the people analytics field.

People analytics — defined as the use of data about human behavior, relationships and traits to make business decisions — helps to replace decision making based on anecdotal experience, hierarchy and risk avoidance with higher-quality decisions based on data analysis, prediction, and experimental research. In working with several dozen Fortune 500 companies with Microsoft’s Workplace Analytics division, we’ve observed companies using people analytics in three main ways to help understand and drive their transformation efforts.


A University Took an Uncommonly Close Look at Its Student-Conduct Data. Here’s What It Found.
Dan Bauman. 8/28/2018. “A University Took an Uncommonly Close Look at Its Student-Conduct Data. Here’s What It Found.” The Chronicle of Higher Education.
When colleges try to understand their students, they resort to a common tool: the survey.

And surveys are fine, says Dayna Weintraub, director of student-affairs research and assessment at Rutgers University at New Brunswick. But she also recognizes their drawbacks: poor response rates, underrepresentation of particular demographic groups, and, in certain instances, answers that lack needed candor.

And so, to assess and change student conduct in a more effective way, Weintraub and her colleagues have tried a new approach: find existing, direct, and detailed data on how Rutgers students conduct themselves, and combine them.

Leading the effort was Kevin Pitt, director of student conduct at the New Jersey university. Working alongside Weintraub, he and his team analyzed, with granular specificity, the behavior patterns of students in a variety of contexts: excessive alcohol or drug use, questionable sexual situations, and others. Pitt and his team examined student-level trends within those areas, combining a variety of previously siloed databases to sketch a more informative picture of student life at Rutgers.


Beyond prediction: Using big data for policy problems
Susan Athey. 2/3/2017. “Beyond prediction: Using big data for policy problems.” Science, 355, Pp. 483-485.
Machine-learning prediction methods have been extremely productive in applications ranging from medicine to allocating fire and health inspectors in cities. However, there are a number of gaps between making a prediction and making a decision, and underlying assumptions need to be understood in order to optimize data-driven decision-making.