Digital computers have transformed work in almost every sector of the economy over the past several decades (1). We are now at the beginning of an even larger and more rapid transformation due to recent advances in machine learning (ML), which is capable of accelerating the pace of automation itself. However, although it is clear that ML is a “general purpose technology,” like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities (2), there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly. We discuss what we see to be key implications for the workforce, drawing on our rubric of what the current generation of ML systems can and cannot do [see the supplementary materials (SM)]. Although parts of many jobs may be “suitable for ML” (SML), other tasks within these same jobs do not fit the criteria for ML well; hence, effects on employment are more complex than the simple replacement and substitution story emphasized by some. Although economic effects of ML are relatively limited today, and we are not facing the imminent “end of work” as is sometimes proclaimed, the implications for the economy and the workforce going forward are profound.
Researchers tested the effects of including cues, anchors, and savings goals in a company email encouraging employee contributions to their 401(k).
Researchers found that providing high contribution rate or savings goal examples, or highlighting high savings thresholds created by the 401(k) plan rules, increased 401(k) contribution rates by 1-2% of income per pay period.
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not of a desire for greater control over the forecasting outcome, as participants’ preference for modifiable algorithms was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control - even a slight amount - over an imperfect algorithm’s forecast.
More than a quarter century ago, organizational scholars began to explore the implications of prosociality in organizations. Three interrelated streams have emerged from this work, which focus on prosocial motives (the desire to benefit others or expend effort out of concern for others), prosocial behaviors (acts that promote/protect the welfare of individuals, groups, or organizations), and prosocial impact (the experience of making a positive difference in the lives of others through one’s work). Prior studies have highlighted the importance of prosocial motives, behaviors, and impact, and have enhanced our understanding of each of them. However, there has been little effort to systematically review and integrate these related lines of work in a way that furthers our understanding of prosociality in organizations. In this article, we provide an overview of the current state of the literature, highlight key findings, identify major research themes, and address important controversies and debates. We call for an expanded view of prosocial behavior and a sharper focus on the costs and unintended consequences of prosocial phenomena. We conclude by suggesting a number of avenues for future research that will address unanswered questions and should provide a more complete understanding of prosociality in the workplace.
Walk up a set of steep stairs next to a vegan Chinese restaurant in Palo Alto in Silicon Valley, and you will see the future of work, or at least one version of it. This is the local office of Humanyze, a firm that provides “people analytics”. It counts several Fortune 500 companies among its clients (though it will not say who they are). Its employees mill around an office full of sunlight and computers, as well as beacons that track their location and interactions. Everyone is wearing an ID badge the size of a credit card and the depth of a book of matches. It contains a microphone that picks up whether they are talking to one another; Bluetooth and infrared sensors to monitor where they are; and an accelerometer to record when they move.
“Every aspect of business is becoming more data-driven. There’s no reason the people side of business shouldn’t be the same,” says Ben Waber, Humanyze’s boss. The company’s own staff are treated much the same way as its clients’ employees. Data from employees’ badges are integrated with information from their e-mail and calendars to form a full picture of how they spend their time at work. Clients get to see only team-level statistics, but Humanyze’s employees can look at their own data, which include metrics such as time spent with people of the same sex, activity levels and the ratio of time spent speaking versus listening.
One day this fall, Ashutosh Garg, the chief executive of a recruiting service called Eightfold.ai, turned up a résumé that piqued his interest.
It belonged to a prospective data scientist, someone who unearths patterns in data to help businesses make decisions, like how to target ads. But curiously, the résumé featured the term “data science” nowhere.
Instead, the résumé belonged to an analyst at Barclays who had done graduate work in physics at the University of California, Los Angeles. Though his profile on the social network LinkedIn indicated that he had never worked as a data scientist, Eightfold’s software flagged him as a good fit. He was similar in certain key ways, like his math and computer chops, to four actual data scientists whom Mr. Garg had instructed the software to consider as a model.
The idea is not to focus on job titles, but “what skills they have,” Mr. Garg said. “You’re really looking for people who have not done it, but can do it.”
The University of Arizona is tracking freshman students’ ID card swipes to anticipate which students are more likely to drop out. University researchers hope to use the data to lower dropout rates. (Dropping out refers both to those who have left higher education entirely and to those who transfer to other colleges.)
The card data tells researchers how frequently a student has entered a residence hall, library, and the student recreation center, which includes a salon, convenience store, mail room, and movie theater. The cards are also used for buying vending machine snacks and more, putting the total number of locations near 700. There’s a sensor embedded in the CatCard student IDs, which are given to every student attending the university.
“By getting their digital traces, you can explore their patterns of movement, behavior and interactions, and that tells you a great deal about them,” Sudha Ram, a professor of management information systems who directs the initiative, said in a press release.
On the day I met Brett Ostrum, in a conference room in Redmond, Wash., he was wearing a black leather jacket and a neat goatee, and his laptop was covered with stickers that made it appear you could glimpse its electronic innards. That was logical enough, because those circuits were his responsibility: He was the corporate vice president at Microsoft in charge of the company’s computing devices, most notably Xbox and the Surface line of laptops and tablets.
It was early 2018, and things were going pretty well for him. Despite Microsoft’s lineage as a software company, and as a brand not exactly synonymous with good design, it was making the most of its late start in the hardware business. Mr. Ostrum and his team were winning market share and high marks from critics.
But he saw a problem on the horizon. It came in the form of extensive surveys Microsoft used to monitor employees’ attitudes. Mr. Ostrum’s business unit scored average or above average on most measures — except one. Employees reported being much less satisfied with their work-life balance than their counterparts elsewhere at the company.
The modern workplace is awash in meetings, many of which are terrible. As a result, people mostly hate going to meetings. The problem is this: The whole point of meetings is to have discussions that you can’t have any other way. And yet most meetings are devoid of real debate.
To improve the meetings you run, and save the meetings you’re invited to, focus on making the discussion more robust.
When teams have a good fight during meetings, team members debate the issues, consider alternatives, challenge one another, listen to minority views, and scrutinize assumptions. Every participant can speak up without fear of retribution. However, many people shy away from such conflict, conflating disagreement and debate with personal attacks. In reality, this sort of friction produces the best decisions. In my study of 5,000 managers and employees, which I describe in my recent book, I found that the best performers are really good at generating rigorous discussions in team meetings. (The sample includes senior and junior managers and individual contributors from a range of industries in corporate America; my aim was to statistically identify work habits that correlate with higher performance.)
So how do you lead a good fight in meetings? Here are six practical tips:
Without models, making sense of data is hard. Data helps describe reality, albeit imperfectly. On its own, though, data can’t recommend one decision over another. If you notice that your best-performing teams are also your most diverse, that may be interesting. But to turn that data point into insight, you need to plug it into some model of the world — for instance, you may hypothesize that having a greater variety of perspectives on a team leads to better decision-making. Your hypothesis represents a model of the world.
Though single models can perform well, ensembles of models work even better. That is why the best thinkers, the most accurate predictors, and the most effective design teams use ensembles of models. They are what I call many-model thinkers.
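The claim that ensembles beat single models can be illustrated with a minimal numerical sketch. All figures below are hypothetical, chosen only to show the mechanism: when diverse models err in different directions, averaging their forecasts lets the errors partially cancel, so the ensemble's error is smaller than the typical individual model's error.

```python
# Hypothetical illustration: averaging diverse models' forecasts.
# The "models" here are just three made-up point forecasts of a
# quantity whose true value we pretend to know.

truth = 100.0                      # true value being forecast (hypothetical)
forecasts = [90.0, 104.0, 112.0]   # three diverse models' forecasts (hypothetical)

# Ensemble forecast: the simple average of the individual forecasts.
ensemble = sum(forecasts) / len(forecasts)

# Compare the ensemble's error with the average individual error.
ensemble_error = abs(ensemble - truth)
avg_individual_error = sum(abs(f - truth) for f in forecasts) / len(forecasts)

print(ensemble)               # 102.0
print(ensemble_error)         # 2.0
print(avg_individual_error)   # 8.666...
```

Because one model guesses low while two guess high, the averaged forecast lands much closer to the truth (error 2.0) than the models do on average (error ≈ 8.67). The cancellation only works when the models are genuinely diverse; three models that all err in the same direction gain little from averaging.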