Harvard Business Review: Better People Analytics

Artificial Intelligence and Ethics

How the Eagles Followed the Numbers to the Super Bowl

How People Analytics Can Change Process, Culture, and Strategy

University Took Uncommonly Close Look at Student-Conduct Data (Rutgers)

Dodgers, Brewers show how analytics is changing baseball

Little Privacy in the Workplace of the Future

Google's Culture of Self-Surveying

The Resume of the Future

More Academic Articles

What can machine learning do? Workforce implications
Erik Brynjolfsson and Tom Mitchell. 12/22/2017. “What can machine learning do? Workforce implications.” Science, 358, 6370, pp. 1530-1534. Publisher's Version. Abstract:
Digital computers have transformed work in almost every sector of the economy over the past several decades (1). We are now at the beginning of an even larger and more rapid transformation due to recent advances in machine learning (ML), which is capable of accelerating the pace of automation itself. However, although it is clear that ML is a “general purpose technology,” like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities (2), there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly. We discuss what we see to be key implications for the workforce, drawing on our rubric of what the current generation of ML systems can and cannot do [see the supplementary materials (SM)]. Although parts of many jobs may be “suitable for ML” (SML), other tasks within these same jobs do not fit the criteria for ML well; hence, effects on employment are more complex than the simple replacement and substitution story emphasized by some. Although economic effects of ML are relatively limited today, and we are not facing the imminent “end of work” as is sometimes proclaimed, the implications for the economy and the workforce going forward are profound.
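
The task-level framing above lends itself to a small worked example. The sketch below is not the authors' rubric (their actual criteria are in the paper's supplementary materials); the job, tasks, time shares, and scores are all invented to show how a job-level average can hide task-level heterogeneity.

```python
# Hypothetical illustration of task-level "suitability for ML" (SML) scoring.
# The task names, time shares, and scores are invented for this sketch; the
# paper's actual rubric (see its supplementary materials) is more detailed.

radiologist_tasks = {
    # task: (share of work time, SML score on a 0-1 scale)
    "read diagnostic images":   (0.40, 0.9),  # pattern recognition: highly SML
    "write up findings":        (0.20, 0.6),
    "consult with physicians":  (0.25, 0.2),  # interpersonal: poorly SML
    "counsel anxious patients": (0.15, 0.1),
}

# A single job-level average hides the heterogeneity that drives the argument:
job_sml = sum(share * sml for share, sml in radiologist_tasks.values())
print(f"Time-weighted job SML: {job_sml:.2f}")

# The task-level view shows partial automation, not wholesale replacement:
for task, (share, sml) in radiologist_tasks.items():
    label = "suitable for ML" if sml >= 0.5 else "poorly suited to ML"
    print(f"  {task:28s} {share:.0%} of time -- {label}")
```
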
Small Cues Change Savings Choices
James J. Choi, Emily Haisley, Jennifer Kurkoski, and Cade Massey. 2017. “Small Cues Change Savings Choices.” Behavioral Evidence Hub. Publisher's Version. Abstract:

PROJECT SUMMARY

Researchers tested the effects of including cues, anchors, and savings goals in a company email encouraging employee contributions to their 401(k).

IMPACT

Researchers found that providing examples of high contribution rates or savings goals, or highlighting high savings thresholds created by the 401(k) plan rules, increased 401(k) contribution rates by 1-2% of income per pay period.

Read More.
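
The study design implies a straightforward analysis: randomize employees to email variants, then compare subsequent contribution rates across arms. Below is a minimal simulation sketch of that comparison; the arm names, baseline rates, and the 1.5-point lift are invented stand-ins, not the study's data.

```python
# Minimal sketch of the implied analysis: employees are randomized to email
# variants, and arms are compared on later 401(k) contribution rates.
# All numbers below are fabricated for illustration only.
import random

random.seed(0)

def contribution_rate(arm: str) -> float:
    """Contribution rate (% of income) after receiving one email variant."""
    base = random.gauss(5.0, 2.0)                     # baseline rate
    lift = {"control": 0.0, "high_anchor": 1.5}[arm]  # stand-in for 1-2% lift
    return max(0.0, base + lift)

arms = {"control": [], "high_anchor": []}
for _ in range(10_000):
    arm = random.choice(list(arms))
    arms[arm].append(contribution_rate(arm))

for arm, rates in arms.items():
    print(f"{arm:11s} mean contribution: {sum(rates)/len(rates):.2f}% of income")
# The estimated effect is the difference in arm means, ~1.5 points here.
```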

Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them
Berkeley Dietvorst, Joseph P. Simmons, and Cade Massey. 6/13/2015. “Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them.” SSRN. Publisher's Version. Abstract:
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms reflected a desire for some control over the forecasting outcome, rather than a desire for greater control, as that preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control, even a slight amount, over an imperfect algorithm’s forecast.
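
The "restricted modification" manipulation is easy to picture in code. Here is a minimal sketch, assuming a hypothetical clamp width and forecast values (the study's actual materials differ):

```python
# Sketch of restricted modification: the forecaster may adjust the algorithm's
# output, but only within a narrow band around it. The band width (max_shift)
# and the example numbers are hypothetical.

def constrained_forecast(model_forecast: float,
                         human_adjustment: float,
                         max_shift: float = 2.0) -> float:
    """Model forecast shifted by the human's adjustment, clamped to
    +/- max_shift around the model's value."""
    shift = max(-max_shift, min(max_shift, human_adjustment))
    return model_forecast + shift

print(constrained_forecast(70.0, +10.0))  # 72.0 -- large tweak capped at +2
print(constrained_forecast(70.0, -1.0))   # 69.0 -- small tweak passes through
```

Per the abstract, even a band this narrow was enough: having some input, not its magnitude, is what drove adoption.
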
The Bright Side of Being Prosocial at Work, and the Dark Side, Too
Mark C. Bolino and Adam Grant. 2016. “The Bright Side of Being Prosocial at Work, and the Dark Side, Too.” The Academy of Management Annals. Publisher's Version. Abstract:
More than a quarter century ago, organizational scholars began to explore the implications of prosociality in organizations. Three interrelated streams have emerged from this work, which focus on prosocial motives (the desire to benefit others or expend effort out of concern for others), prosocial behaviors (acts that promote/protect the welfare of individuals, groups, or organizations), and prosocial impact (the experience of making a positive difference in the lives of others through one’s work). Prior studies have highlighted the importance of prosocial motives, behaviors, and impact, and have enhanced our understanding of each of them. However, there has been little effort to systematically review and integrate these related lines of work in a way that furthers our understanding of prosociality in organizations. In this article, we provide an overview of the current state of the literature, highlight key findings, identify major research themes, and address important controversies and debates. We call for an expanded view of prosocial behavior and a sharper focus on the costs and unintended consequences of prosocial phenomena. We conclude by suggesting a number of avenues for future research that will address unanswered questions and should provide a more complete understanding of prosociality in the workplace.

More Popular Press

A.I. as Talent Scout: Unorthodox Hires, and Maybe Lower Pay
Noam Scheiber. 12/6/2018. “A.I. as Talent Scout: Unorthodox Hires, and Maybe Lower Pay.” The New York Times. Publisher's Version. Abstract:

One day this fall, Ashutosh Garg, the chief executive of a recruiting service called Eightfold.ai, turned up a résumé that piqued his interest.

It belonged to a prospective data scientist, someone who unearths patterns in data to help businesses make decisions, like how to target ads. But curiously, the term “data science” appeared nowhere on it.

Instead, the résumé belonged to an analyst at Barclays who had done graduate work in physics at the University of California, Los Angeles. Though his profile on the social network LinkedIn indicated that he had never worked as a data scientist, Eightfold’s software flagged him as a good fit. He was similar in certain key ways, like his math and computer chops, to four actual data scientists whom Mr. Garg had instructed the software to consider as a model.

The idea is not to focus on job titles, but “what skills they have,” Mr. Garg said. “You’re really looking for people who have not done it, but can do it.”

Read More.
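
Eightfold's matching system is proprietary, but the idea the article describes, scoring candidates by skill similarity to exemplar employees rather than by job title, can be sketched simply. All profiles and skills below are invented.

```python
# Hypothetical sketch of title-blind candidate matching: represent each person
# as a set of skills and score candidates by overlap with exemplar employees.
import math

def cosine(a: set, b: set) -> float:
    """Cosine similarity between two skill sets (treated as binary vectors)."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

exemplar_data_scientists = [
    {"python", "statistics", "machine learning", "sql", "experiment design"},
    {"python", "statistics", "deep learning", "sql", "data visualization"},
]

# A candidate with no data-science title but adjacent skills:
physics_analyst = {"python", "statistics", "numerical modeling", "matlab"}

score = max(cosine(physics_analyst, ex) for ex in exemplar_data_scientists)
print(f"Best match to exemplars: {score:.2f}")  # flag if above a threshold
```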

The riddle of experience vs. memory
Daniel Kahneman. 3/1/2010. “The riddle of experience vs. memory.” TEDTalks. TED. Publisher's Version. Abstract:
Using examples from vacations to colonoscopies, Nobel laureate and founder of behavioral economics Daniel Kahneman reveals how our "experiencing selves" and our "remembering selves" perceive happiness differently. This new insight has profound implications for economics, public policy -- and our own self-awareness.
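
One mechanism from the talk, the peak-end rule, has a compact arithmetic form: the remembering self scores an episode roughly by the average of its most intense moment and its final moment, largely ignoring duration. A toy version of the talk's colonoscopy example, with invented pain scores:

```python
# Toy illustration of the peak-end rule: remembered experience tracks the peak
# and final moments, not total (duration-weighted) experience.
# Per-minute pain scores (0-10) are invented.

patient_a = [4, 7, 8]                  # short procedure, ends at its worst
patient_b = [4, 7, 8, 5, 3, 2, 1, 1]   # longer procedure, tapers off gently

def experienced_total(pain):
    """What the experiencing self accumulates: total moment-by-moment pain."""
    return sum(pain)

def remembered(pain):
    """Peak-end approximation of what the remembering self keeps."""
    return (max(pain) + pain[-1]) / 2

for name, pain in [("A", patient_a), ("B", patient_b)]:
    print(f"Patient {name}: total pain {experienced_total(pain):2d}, "
          f"remembered {remembered(pain):.1f}")
# B endures more total pain yet remembers the episode as less unpleasant.
```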

Meet Your New Boss: An Algorithm

A.I. as Talent Scout: Unorthodox Hires, and Maybe Lower Pay

The Performance Management Revolution

Amazon scrapped 'sexist AI' tool

Making it easier to discover datasets (Google AI)

HR Must Make People Analytics More User-Friendly

More Harvard Business Review

Better People Analytics
Paul Leonardi and Noshir Contractor. 11/1/2018. “Better People Analytics.” Harvard Business Review. Publisher's Version. Abstract:

"We have charts and graphs to back us up. So f*** off.” New hires in Google’s people analytics department began receiving a laptop sticker with that slogan a few years ago, when the group probably felt it needed to defend its work. Back then people analytics—using statistical insights from employee data to make talent management decisions—was still a provocative idea with plenty of skeptics who feared it might lead companies to reduce individuals to numbers. HR collected data on workers, but the notion that it could be actively mined to understand and manage them was novel—and suspect.

Today there’s no need for stickers. More than 70% of companies now say they consider people analytics to be a high priority. The field even has celebrated case studies, like Google’s Project Oxygen, which uncovered the practices of the tech giant’s best managers and then used them in coaching sessions to improve the work of low performers. Other examples, such as Dell’s experiments with increasing the success of its sales force, also point to the power of people analytics.

But hype, as it often does, has outpaced reality. The truth is, people analytics has made only modest progress over the past decade. A survey by Tata Consultancy Services found that just 5% of big-data investments go to HR, the group that typically manages people analytics. And a recent study by Deloitte showed that although people analytics has become mainstream, only 9% of companies believe they have a good understanding of which talent dimensions drive performance in their organizations.

What gives? If, as the sticker says, people analytics teams have charts and graphs to back them up, why haven’t results followed? We believe it’s because most rely on a narrow approach to data analysis: They use data only about individual people, when data about the interplay among people is equally or more important.

People’s interactions are the focus of an emerging discipline we call relational analytics. By incorporating it into their people analytics strategies, companies can better identify employees who are capable of helping them achieve their goals, whether for increased innovation, influence, or efficiency. Firms will also gain insight into which key players they can’t afford to lose and where silos exist in their organizations.

Fortunately, the raw material for relational analytics already exists in companies. It’s the data created by e-mail exchanges, chats, and file transfers—the digital exhaust of a company. By mining it, firms can build good relational analytics models.

In this article we present a framework for understanding and applying relational analytics. And we have the charts and graphs to back us up.

Read More.
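
The "digital exhaust" idea can be made concrete: build a communication graph from message logs, then look for brokers and silos. The sketch below is in the spirit of the article rather than its actual models; it assumes the networkx library, and the log data is invented.

```python
# Sketch of relational analytics over digital exhaust: build a communication
# graph from message logs, then surface brokers (high betweenness) and silos
# (clusters with few cross-links). The log and names are invented.
import networkx as nx

email_log = [  # (sender, recipient) pairs extracted from message metadata
    ("ana", "bo"), ("bo", "ana"), ("ana", "cy"), ("cy", "bo"),
    ("dee", "ed"), ("ed", "dee"), ("dee", "fay"), ("fay", "ed"),
    ("cy", "dee"),  # the lone bridge between two otherwise separate clusters
]

G = nx.Graph()
G.add_edges_from(email_log)

# Key players the firm can't afford to lose: people on many shortest paths.
brokers = nx.betweenness_centrality(G)
print(sorted(brokers.items(), key=lambda kv: -kv[1])[:2])  # cy and dee

# Silos: communities connected internally but sparsely to each other.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print([sorted(c) for c in communities])
```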

The New Analytics of Culture
Matthew Corritore, Amir Goldberg, and Sameer Srivastava. 1/31/2020. “The New Analytics of Culture.” Harvard Business Review. Publisher's Version. Abstract:
A business’s culture can catalyze or undermine success. Yet the tools available for measuring it—namely, employee surveys and questionnaires—have significant shortcomings. Employee self-reports are often unreliable. The values and beliefs that people say are important to them, for example, are often not reflected in how they actually behave. Moreover, surveys provide static, or at best episodic, snapshots of organizations that are constantly evolving. And they’re limited by researchers’ tendency to assume that distinctive and idiosyncratic cultures can be neatly categorized into a few common types.
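
The behavioral-trace alternative the authors describe can be illustrated with language data: score how closely an employee's word usage matches their group's, and track it over time. A minimal sketch with invented messages; published models of linguistic culture are far more sophisticated.

```python
# Minimal sketch of language-based culture measurement: an employee's
# "linguistic fit" is the similarity between their word distribution and the
# team's. Messages are invented for illustration.
from collections import Counter
import math

def word_dist(texts):
    """Relative word frequencies across a set of messages."""
    counts = Counter(w for t in texts for w in t.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(p[w] * q.get(w, 0.0) for w in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

team_msgs = ["let's ship it fast", "move fast and iterate", "ship the fix today"]
new_hire_msgs = ["we should iterate fast", "ship early"]

fit = cosine(word_dist(new_hire_msgs), word_dist(team_msgs))
print(f"Linguistic fit with team: {fit:.2f}")  # a dynamic, not static, measure
```
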
Why "Many-Model-Thinkers" Make Better Decisions
Scott E. Page. 11/19/2018. “Why "Many-Model-Thinkers" Make Better Decisions.” Harvard Business Review. Publisher's Version. Abstract:

Without models, making sense of data is hard. Data helps describe reality, albeit imperfectly. On its own, though, data can’t recommend one decision over another. If you notice that your best-performing teams are also your most diverse, that may be interesting. But to turn that data point into insight, you need to plug it into some model of the world — for instance, you may hypothesize that having a greater variety of perspectives on a team leads to better decision-making. Your hypothesis represents a model of the world.

Though single models can perform well, ensembles of models work even better. That is why the best thinkers, the most accurate predictors, and the most effective design teams use ensembles of models. They are what I call many-model thinkers.
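
The many-model claim has a simple statistical core: diverse models make partly independent errors, so their average is often closer to the truth than any one of them. A minimal sketch with three toy forecasters (all numbers invented):

```python
# Why ensembles help: models with different biases make partly independent
# errors, so averaging them cancels some error. The "models" are toy stand-ins.
import random

random.seed(1)

def make_model(bias: float, noise: float):
    """A toy forecaster: the truth plus its own systematic bias and noise."""
    return lambda truth: truth + bias + random.gauss(0, noise)

models = [make_model(+3, 2), make_model(-2, 2), make_model(+1, 2)]

truth, trials = 50.0, 1_000
solo_err = sum(abs(models[0](truth) - truth) for _ in range(trials)) / trials
ens_err = sum(
    abs(sum(m(truth) for m in models) / len(models) - truth)
    for _ in range(trials)
) / trials

print(f"Single-model mean error: {solo_err:.2f}")
print(f"Ensemble-of-3 mean error: {ens_err:.2f}")  # typically smaller
```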
