Digital computers have transformed work in almost every sector of the economy over the past several decades (1). We are now at the beginning of an even larger and more rapid transformation due to recent advances in machine learning (ML), which is capable of accelerating the pace of automation itself. However, although it is clear that ML is a “general purpose technology,” like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities (2), there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly. We discuss what we see to be key implications for the workforce, drawing on our rubric of what the current generation of ML systems can and cannot do [see the supplementary materials (SM)]. Although parts of many jobs may be “suitable for ML” (SML), other tasks within these same jobs do not fit the criteria for ML well; hence, effects on employment are more complex than the simple replacement and substitution story emphasized by some. Although economic effects of ML are relatively limited today, and we are not facing the imminent “end of work” as is sometimes proclaimed, the implications for the economy and the workforce going forward are profound.
Researchers tested the effects of including cues, anchors, and savings goals in a company email encouraging employee contributions to their 401(k).
Researchers found that providing examples of high contribution rates or savings goals, or highlighting high savings thresholds created by the 401(k) plan rules, increased 401(k) contribution rates by 1 to 2% of income per pay period.
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms reflected a desire for some control over the forecasting outcome, not a desire for greater control, as the preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control, even a slight amount, over an imperfect algorithm’s forecast.
More than a quarter century ago, organizational scholars began to explore the implications of prosociality in organizations. Three interrelated streams have emerged from this work, which focus on prosocial motives (the desire to benefit others or expend effort out of concern for others), prosocial behaviors (acts that promote/protect the welfare of individuals, groups, or organizations), and prosocial impact (the experience of making a positive difference in the lives of others through one’s work). Prior studies have highlighted the importance of prosocial motives, behaviors, and impact, and have enhanced our understanding of each of them. However, there has been little effort to systematically review and integrate these related lines of work in a way that furthers our understanding of prosociality in organizations. In this article, we provide an overview of the current state of the literature, highlight key findings, identify major research themes, and address important controversies and debates. We call for an expanded view of prosocial behavior and a sharper focus on the costs and unintended consequences of prosocial phenomena. We conclude by suggesting a number of avenues for future research that will address unanswered questions and should provide a more complete understanding of prosociality in the workplace.
Walk up a set of steep stairs next to a vegan Chinese restaurant in Palo Alto in Silicon Valley, and you will see the future of work, or at least one version of it. This is the local office of Humanyze, a firm that provides “people analytics”. It counts several Fortune 500 companies among its clients (though it will not say who they are). Its employees mill around an office full of sunlight and computers, as well as beacons that track their location and interactions. Everyone is wearing an ID badge the size of a credit card and the depth of a book of matches. It contains a microphone that picks up whether they are talking to one another; Bluetooth and infrared sensors to monitor where they are; and an accelerometer to record when they move.
“Every aspect of business is becoming more data-driven. There’s no reason the people side of business shouldn’t be the same,” says Ben Waber, Humanyze’s boss. The company’s staff are treated much the same way as its clients’ employees. Data from their badges are integrated with information from their e-mail and calendars to form a full picture of how they spend their time at work. Clients get to see only team-level statistics, but Humanyze’s employees can look at their own data, which include metrics such as time spent with people of the same sex, activity levels, and the ratio of time spent speaking versus listening.
In machine learning and deep learning we can’t do anything without data. So the people who create datasets for us to train our models are the (often under-appreciated) heroes. Some of the most useful datasets are those that become important “academic baselines”; that is, datasets that are widely studied by researchers and used to compare algorithmic changes. Some of these become household names (at least, among households that train models!), such as MNIST, CIFAR-10, and ImageNet.
We all owe a debt of gratitude to those kind folks who have made datasets available for the research community. So fast.ai and the AWS Public Dataset Program have teamed up to try to give back a little: we’ve made some of the most important of these datasets available in a single place, using standard formats, on reliable and fast infrastructure. For a full list and links see the fast.ai datasets page.
fast.ai uses these datasets in the Deep Learning for Coders courses, because they provide great examples of the kind of data that students are likely to encounter, and the academic literature has many examples of model results using these datasets which students can compare their work to. If you use any of these datasets in your research, please show your gratitude by citing the original paper (we’ve provided the appropriate citation link below for each), and if you use them as part of a commercial or educational project, consider adding a note of thanks and a link to the dataset.
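To make the “standard formats” point above concrete: the raw MNIST distribution uses the simple IDX binary container (two zero bytes, a dtype code, a dimension count, big-endian dimension sizes, then the data). The sketch below is our own minimal stdlib parser, not part of fastai or the AWS Public Dataset Program, and it demonstrates the format on a tiny synthetic byte string rather than the real downloaded files.

```python
import struct

def parse_idx(data: bytes):
    """Parse an IDX-format byte string (the container used by raw MNIST files).

    Header: two zero bytes, a dtype code (0x08 = unsigned byte), the number
    of dimensions, then one big-endian uint32 per dimension; pixel/label
    data follows immediately after the header.
    """
    zeros, dtype, ndim = struct.unpack(">HBB", data[:4])
    assert zeros == 0 and dtype == 0x08, "only unsigned-byte IDX is handled here"
    dims = struct.unpack(">" + "I" * ndim, data[4:4 + 4 * ndim])
    offset = 4 + 4 * ndim
    return dims, data[offset:]

# Demo on a tiny synthetic "file": two 2x2 images of unsigned bytes.
header = struct.pack(">HBB", 0, 0x08, 3) + struct.pack(">III", 2, 2, 2)
pixels = bytes(range(8))
dims, raw = parse_idx(header + pixels)
print(dims)     # (2, 2, 2)
print(raw[:4])  # the four pixels of the first 2x2 image
```

The same parser would apply unchanged to a real `train-images-idx3-ubyte` file read from disk, since the header layout is identical regardless of dataset size.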
Uber Technologies Inc. and other pioneers of the so-called gig economy became some of the world’s most valuable private companies by using apps and algorithms to hand out tasks to an army of self-employed workers. Now, established companies like Royal Dutch Shell PLC and General Electric Co. are adopting elements of that model for the full-time workforce.
Companies say the new tools make them more efficient and give employees more opportunities to do new kinds of work. But the software also is starting to take on management tasks that humans have long handled, such as scheduling and shepherding strategic projects. Researchers say the shift could lead to narrower roles for some managers and displace others.
During Jeff Immelt’s 16 years as CEO, GE radically changed its mix of businesses and its strategy.
Its focus—becoming a truly global, technology-driven industrial company that’s blazing the path for the internet of things—has had dramatic implications for the profile of its workforce. Currently, 50% of GE’s 300,000 employees have been with the company for five years or less, meaning that they may lack the personal networks needed to succeed and get ahead. The skills of GE’s workforce have been rapidly changing as well, largely because of the company’s ongoing transformation into a state-of-the-art digital industrial organization that excels at analytics. The good news is that GE has managed to attract thousands of digerati. The bad news is that they have little tolerance for the bureaucracy of a conventional multinational. As is the case with younger workers in general, they want to be in charge of their own careers and don’t want to depend solely on their bosses or HR to identify opportunities and figure out the training and experiences needed to pursue their professional goals.
What’s the solution to these challenges? GE hopes it’s HR analytics. “We need a set of complementary technologies that can take a company that’s in 180 countries around the world and make it small,” says James Gallman, who until recently was the GE executive responsible for people analytics and planning. The technologies he’s referring to are a set of self-service applications available to employees, leaders, and HR. All the apps are based on a generic matching algorithm built by data scientists at GE’s Global Research Center in conjunction with HR. “It’s GE’s version of Match.com,” quips Gallman. “It can take a person and match him or her to something else: online or conventional educational programs, another person, or a job.”
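GE has not published the details of its matching algorithm, so the following is purely a hypothetical sketch of how a “Match.com for careers” could work in principle: represent people and opportunities as skill vectors and rank candidates by cosine similarity. Every name, vector, and helper below (`cosine`, `best_match`, the sample roles) is invented for illustration and implies nothing about GE’s actual system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length skill vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(person, items):
    """Return the opportunity whose skill profile best matches the person."""
    return max(items, key=lambda name: cosine(person, items[name]))

# Hypothetical skill dimensions: [analytics, leadership, software]
employee = [0.9, 0.2, 0.7]
openings = {
    "data-scientist role": [1.0, 0.1, 0.8],
    "plant-manager role":  [0.2, 1.0, 0.1],
    "ml course":           [0.8, 0.0, 0.9],
}
print(best_match(employee, openings))
```

The appeal of a single generic matcher, as described in the excerpt, is that the same similarity machinery can score a person against a course, a mentor, or a job simply by swapping in a different set of candidate vectors.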
Too often new collaborative technologies — though intended to connect employees seamlessly and enable work to get done more efficiently — are misused in ways that impede innovation and hurt performance.
Age-old wisdom suggests it is not what but whom you know that matters. Over decades this truism has been supported by a great deal of research on networks. Work since the 1970s shows that people who maintain certain kinds of networks do better: They are promoted more rapidly than their peers, make more money, are more likely to find a job if they lose their own, and are more likely to be considered high performers.
But the secret to these networks has never been their size. Simply following the advice of self-help books and building mammoth Rolodexes or Facebook accounts actually tends to hurt performance as well as have a negative effect on health and well-being at work. Rather, the people who do better tend to have more ties to people who themselves are not connected. People with ties to the less-connected are more likely to hear about ideas that haven’t gotten exposure elsewhere, and are able to piece together opportunities in ways that less-effectively-networked colleagues cannot.
If bigger is not better in networks, what is the actual impact of social media tools in the workforce? The answer: they are as likely to hurt performance and engagement as they are to help — if they simply foist more collaborative demands on an already overloaded workforce. In most places, people are drowning in collaborative demands imposed by meetings, emails, and phone calls. For most of us, these activities consume 75% to 90% of a typical work week and constitute a gauntlet we must run before getting to the work we are actually paid to do. In this context, new collaborative technologies, when not used appropriately, overload us all and diminish efficiency and innovation at work.
When Brian Jensen told his audience of HR executives that Colorcon wasn’t bothering with annual reviews anymore, they were appalled. This was in 2002, during his tenure as the drugmaker’s head of global human resources. In his presentation at the Wharton School, Jensen explained that Colorcon had found a more effective way of reinforcing desired behaviors and managing performance: Supervisors were giving people instant feedback, tying it to individuals’ own goals, and handing out small weekly bonuses to employees they saw doing good things.
Back then the idea of abandoning the traditional appraisal process—and all that followed from it—seemed heretical. But now, by some estimates, more than one-third of U.S. companies are doing just that. From Silicon Valley to New York, and in offices across the world, firms are replacing annual reviews with frequent, informal check-ins between managers and employees.
How We Got Here
Historical and economic context has played a large role in the evolution of performance management over the decades. When human capital was plentiful, the focus was on which people to let go, which to keep, and which to reward—and for those purposes, traditional appraisals (with their emphasis on individual accountability) worked pretty well. But when talent was in shorter supply, as it is now, developing people became a greater concern—and organizations had to find new ways of meeting that need.