Harvard Business Review: Better People Analytics

Artificial Intelligence and Ethics

How the Eagles Followed the Numbers to the Super Bowl

How People Analytics Can Change Process, Culture, and Strategy

University Took Uncommonly Close Look at Student-Conduct Data (Rutgers)

Dodgers, Brewers show how analytics is changing baseball

Little Privacy in the Workplace of the Future

Google's Culture of Self-Surveying

The Resume of the Future
More Academic Articles

Small Cues Change Savings Choices
James J. Choi, Emily Haisley, Jennifer Kurkoski, and Cade Massey. 2017. “Small Cues Change Savings Choices.” Behavioral Evidence Hub. Publisher's Version. Abstract:

PROJECT SUMMARY

Researchers tested the effects of including cues, anchors, and savings goals in a company email encouraging employee contributions to their 401(k).

IMPACT

Researchers found that providing high contribution rate or savings goal examples, or highlighting high savings thresholds created by the 401(k) plan rules, increased 401(k) contribution rates by 1-2% of income per pay period.

Read More.

Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them
Berkeley Dietvorst, Joseph P. Simmons, and Cade Massey. 6/13/2015. “Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them.” SSRN. Publisher's Version. Abstract:
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms reflected a desire for some control over the forecasting outcome, not a desire for greater control, as their preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control - even a slight amount - over an imperfect algorithm’s forecast.
The Bright Side of Being Prosocial at Work, and the Dark Side, Too
Mark C. Bolino and Adam Grant. 2016. “The Bright Side of Being Prosocial at Work, and the Dark Side, Too.” The Academy of Management Annals. Publisher's Version. Abstract:
More than a quarter century ago, organizational scholars began to explore the implications of prosociality in organizations. Three interrelated streams have emerged from this work, which focus on prosocial motives (the desire to benefit others or expend effort out of concern for others), prosocial behaviors (acts that promote/protect the welfare of individuals, groups, or organizations), and prosocial impact (the experience of making a positive difference in the lives of others through one’s work). Prior studies have highlighted the importance of prosocial motives, behaviors, and impact, and have enhanced our understanding of each of them. However, there has been little effort to systematically review and integrate these related lines of work in a way that furthers our understanding of prosociality in organizations. In this article, we provide an overview of the current state of the literature, highlight key findings, identify major research themes, and address important controversies and debates. We call for an expanded view of prosocial behavior and a sharper focus on the costs and unintended consequences of prosocial phenomena. We conclude by suggesting a number of avenues for future research that will address unanswered questions and should provide a more complete understanding of prosociality in the workplace.
Shifts and Ladders: Comparing the Role of Internal and External Mobility in Managerial Careers
Matthew Bidwell and Ethan Mollick. 10/5/2015. “Shifts and Ladders: Comparing the Role of Internal and External Mobility in Managerial Careers.” Organization Science, 26, 6, Pp. 1553-1804. Publisher's Version. Abstract:
Employees can build their careers either by moving into a new job within their current organization or else by moving to a different organization. We use matching perspectives on job mobility to develop predictions about the different roles that those internal and external moves will play within careers. Using data on the careers of master of business administration alumni, we show how internal and external mobility are associated with very different rewards: upward progression into a job with greater responsibilities is much more likely to happen through internal mobility than external mobility; yet despite this difference, external moves offer pay increases similar to internal ones, as employers seek to attract external hires. Consistent with our arguments, we also show that the pay increases associated with external moves are lower when the moves take place for reasons other than career advancement, such as following a layoff or when moving into a different kind of work. Despite growing interest in boundaryless careers, our findings indicate that internal and external mobility play very different roles in executives’ careers, with upward mobility still happening overwhelmingly within organizations.
More

More Popular Press

Artificial Intelligence and Ethics
Jonathan Shaw. 1/2019. “Artificial Intelligence and Ethics.” Harvard Magazine. Publisher's Version. Abstract:

ON MARCH 18, 2018, at around 10 P.M., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car. Although there was a human operator behind the wheel, an autonomous system—artificial intelligence—was in full control. This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions. What moral obligations did the system’s programmers have to prevent their creation from taking a human life? And who was responsible for Herzberg’s death? The person in the driver’s seat? The company testing the car’s capabilities? The designers of the AI system, or even the manufacturers of its onboard sensory equipment?

“Artificial intelligence” refers to systems that can be designed to take cues from their environment and, based on those inputs, proceed to solve problems, assess risks, make predictions, and take actions. In the era predating powerful computers and big data, such systems were programmed by humans and followed rules of human invention, but advances in technology have led to the development of new approaches. One of these is machine learning, now the most active area of AI, in which statistical methods allow a system to “learn” from data, and make decisions, without being explicitly programmed. Such systems pair an algorithm, or series of steps for solving a problem, with a knowledge base or stream—the information that the algorithm uses to construct a model of the world.

Ethical concerns about these advances focus at one extreme on the use of AI in deadly military drones, or on the risk that AI could take down global financial systems. Closer to home, AI has spurred anxiety about unemployment, as autonomous systems threaten to replace millions of truck drivers, and make Lyft and Uber obsolete. And beyond these larger social and economic considerations, data scientists have real concerns about bias, about ethical implementations of the technology, and about the nature of interactions between AI systems and humans if these systems are to be deployed properly and fairly in even the most mundane applications.

Consider a prosaic-seeming social change: machines are already being given the power to make life-altering, everyday decisions about people. Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend.

But such applications raise troubling ethical issues because AI systems can reinforce what they have learned from real-world data, even amplifying familiar risks, such as racial or gender bias. Systems can also make errors of judgment when confronted with unfamiliar scenarios. And because many such systems are “black boxes,” the reasons for their decisions are not easily accessed or understood by humans—and therefore difficult to question, or probe.

Read More.

Artificial Intelligence's 'Black Box' Is Nothing to Fear
Vijay Pande. 1/25/2018. “Artificial Intelligence's 'Black Box' Is Nothing to Fear.” The New York Times. Publisher's Version. Abstract:

Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s fear about the way the technology works. A recent MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute recommended that public agencies responsible for criminal justice, health care, welfare and education shouldn’t use such technology.

Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of A.I., the term more broadly suggests an image of being in the “dark” about how the technology works: We provide the data, models, and architectures, and then computers give us answers while continuing to learn on their own, in a way that’s seemingly impossible — and certainly too complicated — for us to understand.

Read More.

Inside Google’s culture of relentless self-surveying
Tim Fernholz. 6/26/2013. “Inside Google’s culture of relentless self-surveying.” Quartz. Publisher's Version. Abstract:

When Google recently admitted that the baffling brainteasers it posed to interviewees were utterly useless at predicting which ones would make good employees, it was another example of the power of what Google calls “people analytics”—the mixing of Big Data with management science to come up with smarter ways to work.

The company’s obsession with human data is perhaps best known for producing the rule that no employee should sit more than 150 feet (46 meters) away from a micro-kitchen, and that in those kitchens the chocolate M&Ms be kept in opaque jars while healthier food is in clear containers, to encourage healthy eating habits. Google’s often controversial culture of omniscience about its users is mirrored, inside its posh campuses, by a team of industrial-organizational psychologists, behavioral economists, consultants and statisticians who survey and experiment with Google’s staff.

Read More.

A.I. as Talent Scout: Unorthodox Hires, and Maybe Lower Pay
Noam Scheiber. 12/6/2018. “A.I. as Talent Scout: Unorthodox Hires, and Maybe Lower Pay.” The New York Times. Publisher's Version. Abstract:

One day this fall, Ashutosh Garg, the chief executive of a recruiting service called Eightfold.ai, turned up a résumé that piqued his interest.

It belonged to a prospective data scientist, someone who unearths patterns in data to help businesses make decisions, like how to target ads. But curiously, the résumé featured the term “data science” nowhere.

Instead, the résumé belonged to an analyst at Barclays who had done graduate work in physics at the University of California, Los Angeles. Though his profile on the social network LinkedIn indicated that he had never worked as a data scientist, Eightfold’s software flagged him as a good fit. He was similar in certain key ways, like his math and computer chops, to four actual data scientists whom Mr. Garg had instructed the software to consider as a model.

The idea is not to focus on job titles, but “what skills they have,” Mr. Garg said. “You’re really looking for people who have not done it, but can do it.”

Read More.

More

Meet Your New Boss: An Algorithm

A.I. as Talent Scout: Unorthodox Hires, and Maybe Lower Pay

The Performance Management Revolution

Amazon scrapped 'sexist AI' tool

Making it easier to discover datasets (Google AI)

HR Must Make People Analytics More User-Friendly

More Harvard Business Review

Competing on Talent Analytics
Thomas H. Davenport, Jeanne Harris, and Jeremy Shapiro. 10/2010. “Competing on Talent Analytics.” Harvard Business Review. Publisher's Version. Abstract:
Do you think you know how to get the best from your people? Or do you actually know? How do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel?

Leading-edge companies are increasingly adopting sophisticated methods of analyzing employee data to enhance their competitive advantage. Google, Best Buy, Sysco, and others are beginning to understand exactly how to ensure the highest productivity, engagement, and retention of top talent, and then replicating their successes. If you want better performance from your top employees—who are perhaps your greatest asset and your largest expense—you’ll do well to favor analytics over your gut instincts.

Harrah’s Entertainment is well-known for employing analytics to select customers with the greatest profit potential and to refine pricing and promotions for targeted segments. (See “Competing on Analytics,” HBR, January 2006.) Harrah’s has also extended this approach to people decisions, using insights derived from data to put the right employees in the right jobs and creating models that calculate the optimal number of staff members to deal with customers at the front desk and other service points. Today the company uses analytics to hold itself accountable for the things that matter most to its staff, knowing that happier and healthier employees create better-satisfied guests.

Read More.

Reinventing Talent Management: How GE Uses Analytics to Guide a More Digital, Far-Flung Workforce
Steven Prokesch. 9/2017. “Reinventing Talent Management: How GE Uses Analytics to Guide a More Digital, Far-Flung Workforce.” Harvard Business Review. Publisher's Version. Abstract:

During Jeff Immelt’s 16 years as CEO, GE radically changed its mix of businesses and its strategy.

Its focus—becoming a truly global, technology-driven industrial company that’s blazing the path for the internet of things—has had dramatic implications for the profile of its workforce. Currently, 50% of GE’s 300,000 employees have been with the company for five years or less, meaning that they may lack the personal networks needed to succeed and get ahead. The skills of GE’s workforce have been rapidly changing as well, largely because of the company’s ongoing transformation into a state-of-the-art digital industrial organization that excels at analytics. The good news is that GE has managed to attract thousands of digerati. The bad news is that they have little tolerance for the bureaucracy of a conventional multinational. As is the case with younger workers in general, they want to be in charge of their own careers and don’t want to depend solely on their bosses or HR to identify opportunities and figure out the training and experiences needed to pursue their professional goals.

What’s the solution to these challenges? GE hopes it’s HR analytics. “We need a set of complementary technologies that can take a company that’s in 180 countries around the world and make it small,” says James Gallman, who until recently was the GE executive responsible for people analytics and planning. The technologies he’s referring to are a set of self-service applications available to employees, leaders, and HR. All the apps are based on a generic matching algorithm built by data scientists at GE’s Global Research Center in conjunction with HR. “It’s GE’s version of Match.com,” quips Gallman. “It can take a person and match him or her to something else: online or conventional educational programs, another person, or a job.”

Read More.

How People Analytics Can Help You Change Process, Culture, and Strategy
Chantrelle Nielsen and Natalie McCullough. 5/17/2018. “How People Analytics Can Help You Change Process, Culture, and Strategy.” Harvard Business Review. Publisher's Version. Abstract:

It seems like every business is struggling with the concept of transformation. Large incumbents are trying to keep pace with digital upstarts, and even digital native companies born as disruptors know that they need to transform. Take Uber: at only eight years old, it’s already upended the business model of taxis. Now it’s trying to move from a software platform to a robotics lab to build self-driving cars.

And while the number of initiatives that fall under the umbrella of “transformation” is so broad that it can seem meaningless, this breadth is actually one of the defining characteristics that differentiate transformation from ordinary change. A transformation is a whole portfolio of change initiatives that together form an integrated program.

And so a transformation is a system of systems, all made up of the most complex system of all — people. For this reason, organizational transformation is uniquely suited to the analysis, prediction, and experimental research approach of the people analytics field.

People analytics — defined as the use of data about human behavior, relationships and traits to make business decisions — helps to replace decision making based on anecdotal experience, hierarchy and risk avoidance with higher-quality decisions based on data analysis, prediction, and experimental research. In working with several dozen Fortune 500 companies through Microsoft’s Workplace Analytics division, we’ve observed companies using people analytics in three main ways to help understand and drive their transformation efforts.

Read More.

More