Digital computers have transformed work in almost every sector of the economy over the past several decades (1). We are now at the beginning of an even larger and more rapid transformation due to recent advances in machine learning (ML), which is capable of accelerating the pace of automation itself. However, although it is clear that ML is a “general purpose technology,” like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities (2), there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly. We discuss what we see to be key implications for the workforce, drawing on our rubric of what the current generation of ML systems can and cannot do [see the supplementary materials (SM)]. Although parts of many jobs may be “suitable for ML” (SML), other tasks within these same jobs do not fit the criteria for ML well; hence, effects on employment are more complex than the simple replacement and substitution story emphasized by some. Although economic effects of ML are relatively limited today, and we are not facing the imminent “end of work” as is sometimes proclaimed, the implications for the economy and the workforce going forward are profound.
Researchers tested the effects of including cues, anchors, and savings goals in a company email encouraging employee contributions to their 401(k).
Researchers found that providing examples of high contribution rates or savings goals, or highlighting high savings thresholds created by the 401(k) plan rules, increased 401(k) contribution rates by 1-2% of income per pay period.
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not of a desire for greater control over the forecasting outcome, as participants’ preference for modifiable algorithms was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control - even a slight amount - over an imperfect algorithm’s forecast.
More than a quarter century ago, organizational scholars began to explore the implications of prosociality in organizations. Three interrelated streams have emerged from this work, which focus on prosocial motives (the desire to benefit others or expend effort out of concern for others), prosocial behaviors (acts that promote/protect the welfare of individuals, groups, or organizations), and prosocial impact (the experience of making a positive difference in the lives of others through one’s work). Prior studies have highlighted the importance of prosocial motives, behaviors, and impact, and have enhanced our understanding of each of them. However, there has been little effort to systematically review and integrate these related lines of work in a way that furthers our understanding of prosociality in organizations. In this article, we provide an overview of the current state of the literature, highlight key findings, identify major research themes, and address important controversies and debates. We call for an expanded view of prosocial behavior and a sharper focus on the costs and unintended consequences of prosocial phenomena. We conclude by suggesting a number of avenues for future research that will address unanswered questions and should provide a more complete understanding of prosociality in the workplace.
ON MARCH 18, 2018, at around 10 P.M., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car. Although there was a human operator behind the wheel, an autonomous system—artificial intelligence—was in full control. This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions. What moral obligations did the system’s programmers have to prevent their creation from taking a human life? And who was responsible for Herzberg’s death? The person in the driver’s seat? The company testing the car’s capabilities? The designers of the AI system, or even the manufacturers of its onboard sensory equipment?
“Artificial intelligence” refers to systems that can be designed to take cues from their environment and, based on those inputs, proceed to solve problems, assess risks, make predictions, and take actions. In the era predating powerful computers and big data, such systems were programmed by humans and followed rules of human invention, but advances in technology have led to the development of new approaches. One of these is machine learning, now the most active area of AI, in which statistical methods allow a system to “learn” from data, and make decisions, without being explicitly programmed. Such systems pair an algorithm, or series of steps for solving a problem, with a knowledge base or stream—the information that the algorithm uses to construct a model of the world.
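The idea of a system that "learns" from data without being explicitly programmed can be made concrete with a minimal sketch. The example below is illustrative, not drawn from the article: a least-squares fit in plain Python, where the program is never told the rule behind the data (here, y = 2x + 1) but estimates it from examples, pairing an algorithm (the fitting steps) with a knowledge base (the observed points).

```python
# Minimal sketch of "learning" from data: a one-variable least-squares fit.
# The rule y = 2x + 1 is never written into the program; it is estimated
# from the examples alone.

def fit_line(points):
    """Estimate slope and intercept from (x, y) pairs by least squares."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Knowledge base": observed examples that happen to follow y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
slope, intercept = fit_line(data)
print(slope, intercept)             # learned model: y = 2x + 1
print(slope * 10 + intercept)       # prediction for an unseen input, x = 10
```

Real machine-learning systems use far richer models and data, but the division of labor is the same: the algorithm supplies the procedure, the data supplies the world it models.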
Ethical concerns about these advances focus at one extreme on the use of AI in deadly military drones, or on the risk that AI could take down global financial systems. Closer to home, AI has spurred anxiety about unemployment, as autonomous systems threaten to replace millions of truck drivers and make Lyft and Uber drivers obsolete. And beyond these larger social and economic considerations, data scientists have real concerns about bias, about ethical implementations of the technology, and about the nature of interactions between AI systems and humans if these systems are to be deployed properly and fairly in even the most mundane applications.
Consider a prosaic-seeming social change: machines are already being given the power to make life-altering, everyday decisions about people. Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend.
But such applications raise troubling ethical issues because AI systems can reinforce what they have learned from real-world data, even amplifying familiar risks, such as racial or gender bias. Systems can also make errors of judgment when confronted with unfamiliar scenarios. And because many such systems are “black boxes,” the reasons for their decisions are not easily accessed or understood by humans—and therefore difficult to question, or probe.
The most illuminating moment of the Eagles’ enchanted season was a Week 3 play ridiculed in Philadelphia but celebrated here by a small cadre of people who recognized its significance almost immediately.
What fueled the excitement among members of the EdjSports crew was not the outcome of the play — a 6-yard sack of Carson Wentz on fourth-and-8 that gifted the Giants good field position — but rather the call itself. Leading by 7-0 on the Giants’ 43-yard line a few minutes before halftime, the Eagles opted not to punt.
By keeping Philadelphia’s offense on the field in a situation almost always played safe in the risk-averse N.F.L., Coach Doug Pederson did not buck conventional wisdom so much as roll his eyes at it.
An intern at EdjSports, responding to a flurry of text messages from his colleagues about the play, ran the numbers at home. The Eagles, by going for it, improved their probability of winning by 0.5 percent. Defending his decision (again) at a news conference the next day, Pederson cited that exact statistic.
Using examples from vacations to colonoscopies, Nobel laureate and founder of behavioral economics Daniel Kahneman reveals how our "experiencing selves" and our "remembering selves" perceive happiness differently. This new insight has profound implications for economics, public policy, and our own self-awareness.
Employers are monitoring their workers more often and using more tracking tools than ever. What's surprising is that a growing number of employees don't mind.
Advancements in technologies—including sensors, mobile devices, wireless communications, data analytics and biometrics—are rapidly expanding monitoring capabilities and reducing the cost of surveillance, and that's prompting more employers to use these tools.
In 2015, about 30 percent of large employers were monitoring employees in nontraditional ways, such as analyzing e-mail text, logging computer usage or tracking employee movements, says Brian Kropp, group vice president of HR practice for Gartner, a research and advisory firm. By 2018, that number had jumped to 46 percent, and Gartner projects it will reach well over 50 percent this year.
A business’s culture can catalyze or undermine success. Yet the tools available for measuring it—namely, employee surveys and questionnaires—have significant shortcomings. Employee self-reports are often unreliable. The values and beliefs that people say are important to them, for example, are often not reflected in how they actually behave. Moreover, surveys provide static, or at best episodic, snapshots of organizations that are constantly evolving. And they’re limited by researchers’ tendency to assume that distinctive and idiosyncratic cultures can be neatly categorized into a few common types.
When Brian Jensen told his audience of HR executives that Colorcon wasn’t bothering with annual reviews anymore, they were appalled. This was in 2002, during his tenure as the drugmaker’s head of global human resources. In his presentation at the Wharton School, Jensen explained that Colorcon had found a more effective way of reinforcing desired behaviors and managing performance: Supervisors were giving people instant feedback, tying it to individuals’ own goals, and handing out small weekly bonuses to employees they saw doing good things.
Back then the idea of abandoning the traditional appraisal process—and all that followed from it—seemed heretical. But now, by some estimates, more than one-third of U.S. companies are doing just that. From Silicon Valley to New York, and in offices across the world, firms are replacing annual reviews with frequent, informal check-ins between managers and employees.
How We Got Here
Historical and economic context has played a large role in the evolution of performance management over the decades. When human capital was plentiful, the focus was on which people to let go, which to keep, and which to reward—and for those purposes, traditional appraisals (with their emphasis on individual accountability) worked pretty well. But when talent was in shorter supply, as it is now, developing people became a greater concern—and organizations had to find new ways of meeting that need.
"We have charts and graphs to back us up. So f*** off.” New hires in Google’s people analytics department began receiving a laptop sticker with that slogan a few years ago, when the group probably felt it needed to defend its work. Back then people analytics—using statistical insights from employee data to make talent management decisions—was still a provocative idea with plenty of skeptics who feared it might lead companies to reduce individuals to numbers. HR collected data on workers, but the notion that it could be actively mined to understand and manage them was novel—and suspect.
Today there’s no need for stickers. More than 70% of companies now say they consider people analytics to be a high priority. The field even has celebrated case studies, like Google’s Project Oxygen, which uncovered the practices of the tech giant’s best managers and then used them in coaching sessions to improve the work of low performers. Other examples, such as Dell’s experiments with increasing the success of its sales force, also point to the power of people analytics.
But hype, as it often does, has outpaced reality. The truth is, people analytics has made only modest progress over the past decade. A survey by Tata Consultancy Services found that just 5% of big-data investments go to HR, the group that typically manages people analytics. And a recent study by Deloitte showed that although people analytics has become mainstream, only 9% of companies believe they have a good understanding of which talent dimensions drive performance in their organizations.
What gives? If, as the sticker says, people analytics teams have charts and graphs to back them up, why haven’t results followed? We believe it’s because most rely on a narrow approach to data analysis: They use data only about individual people, when data about the interplay among people is equally or more important.
People’s interactions are the focus of an emerging discipline we call relational analytics. By incorporating it into their people analytics strategies, companies can better identify employees who are capable of helping them achieve their goals, whether for increased innovation, influence, or efficiency. Firms will also gain insight into which key players they can’t afford to lose and where silos exist in their organizations.
Fortunately, the raw material for relational analytics already exists in companies. It’s the data created by e-mail exchanges, chats, and file transfers—the digital exhaust of a company. By mining it, firms can build good relational analytics models.
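As an illustration of what mining that digital exhaust can look like, the sketch below builds a simple communication graph from e-mail metadata and ranks employees by how many distinct colleagues they exchange messages with. The log entries and names are invented for this example, and degree counting is only the crudest of the network measures a real relational analytics model would use.

```python
# Hypothetical sketch of relational analytics on "digital exhaust":
# build a who-talks-to-whom graph from (sender, recipient) e-mail
# metadata, then rank people by number of distinct contacts.
from collections import defaultdict

# Invented e-mail log for illustration.
email_log = [
    ("ana", "ben"), ("ana", "carla"), ("ben", "carla"),
    ("carla", "dev"), ("dev", "ana"), ("ana", "erin"),
]

contacts = defaultdict(set)
for sender, recipient in email_log:
    contacts[sender].add(recipient)   # treat each exchange as a
    contacts[recipient].add(sender)   # two-way relationship

# Degree (distinct-contact count): a simple proxy for who connects
# the most people, and a first hint at silos and key players.
ranking = sorted(contacts, key=lambda p: len(contacts[p]), reverse=True)
print(ranking[0], len(contacts[ranking[0]]))  # → ana 4
```

In practice, firms would weight edges by message frequency, include chats and file transfers, and use richer centrality and community-detection measures, but the raw input is the same metadata companies already generate.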
In this article we present a framework for understanding and applying relational analytics. And we have the charts and graphs to back us up.