Digital computers have transformed work in almost every sector of the economy over the past several decades (1). We are now at the beginning of an even larger and more rapid transformation due to recent advances in machine learning (ML), which is capable of accelerating the pace of automation itself. However, although it is clear that ML is a “general purpose technology,” like the steam engine and electricity, which spawns a plethora of additional innovations and capabilities (2), there is no widely shared agreement on the tasks where ML systems excel, and thus little agreement on the specific expected impacts on the workforce and on the economy more broadly. We discuss what we see to be key implications for the workforce, drawing on our rubric of what the current generation of ML systems can and cannot do [see the supplementary materials (SM)]. Although parts of many jobs may be “suitable for ML” (SML), other tasks within these same jobs do not fit the criteria for ML well; hence, effects on employment are more complex than the simple replacement and substitution story emphasized by some. Although economic effects of ML are relatively limited today, and we are not facing the imminent “end of work” as is sometimes proclaimed, the implications for the economy and the workforce going forward are profound.
Researchers tested the effects of including cues, anchors, and savings goals in a company email encouraging employee contributions to their 401(k).
Researchers found that providing examples of high contribution rates or savings goals, or highlighting the high savings thresholds created by the 401(k) plan's rules, increased 401(k) contribution rates by 1 to 2% of income per pay period.
Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms reflected a desire for some control over the forecasting outcome, not a desire for greater control over it, as their preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control - even a slight amount - over an imperfect algorithm’s forecast.
More than a quarter century ago, organizational scholars began to explore the implications of prosociality in organizations. Three interrelated streams have emerged from this work, which focus on prosocial motives (the desire to benefit others or expend effort out of concern for others), prosocial behaviors (acts that promote/protect the welfare of individuals, groups, or organizations), and prosocial impact (the experience of making a positive difference in the lives of others through one’s work). Prior studies have highlighted the importance of prosocial motives, behaviors, and impact, and have enhanced our understanding of each of them. However, there has been little effort to systematically review and integrate these related lines of work in a way that furthers our understanding of prosociality in organizations. In this article, we provide an overview of the current state of the literature, highlight key findings, identify major research themes, and address important controversies and debates. We call for an expanded view of prosocial behavior and a sharper focus on the costs and unintended consequences of prosocial phenomena. We conclude by suggesting a number of avenues for future research that will address unanswered questions and should provide a more complete understanding of prosociality in the workplace.
Employers are monitoring their workers more often and using more tracking tools than ever. What's surprising is that a growing number of employees don't mind.
Advancements in technologies―including sensors, mobile devices, wireless communications, data analytics and biometrics―are rapidly expanding monitoring capabilities and reducing the cost of surveillance, and that's prompting more employers to use these tools.
In 2015, about 30 percent of large employers were monitoring employees in nontraditional ways, such as analyzing e-mail text, logging computer usage or tracking employee movements, says Brian Kropp, group vice president of HR practice for Gartner, a research and advisory firm. By 2018, that number had jumped to 46 percent, and Gartner projects it will reach well over 50 percent this year.
Alongside the excitement and hype about our growing reliance on artificial intelligence, there’s fear about the way the technology works. A recent MIT Technology Review article titled “The Dark Secret at the Heart of AI” warned: “No one really knows how the most advanced algorithms do what they do. That could be a problem.” Thanks to this uncertainty and lack of accountability, a report by the AI Now Institute recommended that public agencies responsible for criminal justice, health care, welfare and education shouldn’t use such technology.
Given these types of concerns, the unseeable space between where data goes in and answers come out is often referred to as a “black box” — seemingly a reference to the hardy (and in fact orange, not black) data recorders mandated on aircraft and often examined after accidents. In the context of A.I., the term more broadly suggests an image of being in the “dark” about how the technology works: We provide the data, models and architectures, and then computers provide us answers while continuing to learn on their own, in a way that’s seemingly impossible — and certainly too complicated — for us to understand.
An algorithm that was being tested as a recruitment tool by online giant Amazon was sexist and had to be scrapped, according to a Reuters report. The artificial intelligence system was trained on data submitted by applicants over a 10-year period, much of which came from men, the report claimed.
Reuters was told by members of the team working on it that the system effectively taught itself that male candidates were preferable. Amazon has not responded to the claims.
Reuters spoke to five members of the team who developed the machine learning tool in 2014, none of whom wanted to be publicly named. They told Reuters that the system was intended to review job applications and give candidates a score ranging from one to five stars.
"They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those," said one of the engineers who spoke to Reuters.
Walk up a set of steep stairs next to a vegan Chinese restaurant in Palo Alto in Silicon Valley, and you will see the future of work, or at least one version of it. This is the local office of Humanyze, a firm that provides “people analytics”. It counts several Fortune 500 companies among its clients (though it will not say who they are). Its employees mill around an office full of sunlight and computers, as well as beacons that track their location and interactions. Everyone is wearing an ID badge the size of a credit card and the thickness of a book of matches. It contains a microphone that picks up whether they are talking to one another; Bluetooth and infrared sensors to monitor where they are; and an accelerometer to record when they move.
“Every aspect of business is becoming more data-driven. There’s no reason the people side of business shouldn’t be the same,” says Ben Waber, Humanyze’s boss. The company’s staff are treated much the same way as its clients. Data from their employees’ badges are integrated with information from their e-mail and calendars to form a full picture of how they spend their time at work. Clients get to see only team-level statistics, but Humanyze’s employees can look at their own data, which include metrics such as time spent with people of the same sex, activity levels and the ratio of time spent speaking versus listening.
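The aggregation described above — individual badge metrics rolled up so that clients see only team-level statistics — can be sketched in a few lines. This is a hypothetical illustration, not Humanyze's actual schema or code; the field names (`employee_id`, `team`, `speaking_s`, `listening_s`) are assumptions.

```python
# Hypothetical sketch of rolling individual badge metrics up to team level,
# so that only group statistics (not any one person's data) are exposed.
from statistics import mean

# Invented example records: seconds spent speaking vs. listening per employee.
badge_records = [
    {"employee_id": "e1", "team": "sales", "speaking_s": 1200, "listening_s": 1800},
    {"employee_id": "e2", "team": "sales", "speaking_s": 900,  "listening_s": 2100},
    {"employee_id": "e3", "team": "eng",   "speaking_s": 600,  "listening_s": 2400},
]

def speak_listen_ratio(rec):
    """Ratio of time spent speaking versus listening for one employee."""
    return rec["speaking_s"] / rec["listening_s"]

def team_stats(records):
    """Average the individual ratios within each team, returning only
    team-level figures — the privacy model described in the text."""
    teams = {}
    for rec in records:
        teams.setdefault(rec["team"], []).append(speak_listen_ratio(rec))
    return {team: mean(ratios) for team, ratios in teams.items()}

print(team_stats(badge_records))
```

An individual employee could still be shown their own `speak_listen_ratio`, while a client dashboard would call only `team_stats`.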
Managing HR-related data is critical to any organization’s success, yet progress in HR analytics has been glacially slow, a lag that consulting firms in the U.S. and Europe routinely lament. A Harvard Business Review analytics study of 230 executives nonetheless suggests a stunning rate of anticipated progress: 15% said they use “predictive analytics based on HR data and data from other sources within or outside the organization,” while 48% predicted they would be doing so in two years. The reality seems less impressive: a global IBM survey of more than 1,700 CEOs found that 71% identified human capital as a key source of competitive advantage, yet a global study by Tata Consultancy Services showed that only 5% of big-data investments were in human resources.
Recently, my colleague Wayne Cascio and I took up the question of why HR analytics progress has been so slow despite many decades of research and practical tool building, an exponential increase in available HR data, and consistent evidence that improved HR and talent management leads to stronger organizational performance. Our article in the Journal of Organizational Effectiveness: People and Performance discusses factors that can effectively “push” HR measures and analysis to audiences in a more impactful way, as well as factors that can effectively lead others to “pull” that data for analysis throughout the organization.
It seems like every business is struggling with the concept of transformation. Large incumbents are trying to keep pace with digital upstarts, and even digital native companies born as disruptors know that they need to transform. Take Uber: at only eight years old, it’s already upended the business model of taxis. Now it’s trying to move from a software platform to a robotics lab to build self-driving cars.
And while the number of initiatives that fall under the umbrella of “transformation” is so broad that it can seem meaningless, this breadth is actually one of the defining characteristics that differentiates transformation from ordinary change. A transformation is a whole portfolio of change initiatives that together form an integrated program.
And so a transformation is a system of systems, all made up of the most complex system of all — people. For this reason, organizational transformation is uniquely suited to the analysis, prediction, and experimental research approach of the people analytics field.
People analytics — defined as the use of data about human behavior, relationships and traits to make business decisions — helps to replace decision making based on anecdotal experience, hierarchy and risk avoidance with higher-quality decisions based on data analysis, prediction, and experimental research. In working with several dozen Fortune 500 companies with Microsoft’s Workplace Analytics division, we’ve observed companies using people analytics in three main ways to help understand and drive their transformation efforts.
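The "experimental research" side of the definition above can be made concrete with a minimal sketch: comparing an outcome metric between a pilot group and a control group and reporting a standardized effect size. The groups, numbers, and metric here are invented for illustration, not drawn from Microsoft's Workplace Analytics.

```python
# Minimal sketch of an experimental people-analytics comparison:
# did a pilot initiative shift an outcome metric relative to a control group?
# All numbers below are invented for illustration.
from math import sqrt
from statistics import mean, stdev

control   = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]   # e.g. weekly cross-team meetings
treatment = [3.8, 3.5, 4.1, 3.6, 3.9, 3.7]   # same metric after the initiative

def effect_size(a, b):
    """Difference in means scaled by the pooled standard deviation
    (Cohen's d), so the shift is comparable across metrics."""
    pooled = sqrt((stdev(a) ** 2 + stdev(b) ** 2) / 2)
    return (mean(b) - mean(a)) / pooled

print(f"mean shift: {mean(treatment) - mean(control):.2f}")
print(f"Cohen's d:  {effect_size(control, treatment):.2f}")
```

In practice such a comparison would also need a significance test and a randomized assignment of teams, but the core move — replacing anecdote with a measured, standardized difference — is what the definition describes.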
Do you think you know how to get the best from your people? Or do you actually know? How do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel?
Leading-edge companies are increasingly adopting sophisticated methods of analyzing employee data to enhance their competitive advantage. Google, Best Buy, Sysco, and others are beginning to understand exactly how to ensure the highest productivity, engagement, and retention of top talent, and then to replicate those successes. If you want better performance from your top employees—who are perhaps your greatest asset and your largest expense—you’ll do well to favor analytics over your gut instincts.
Harrah’s Entertainment is well-known for employing analytics to select customers with the greatest profit potential and to refine pricing and promotions for targeted segments. (See “Competing on Analytics,” HBR, January 2006.) Harrah’s has also extended this approach to people decisions, using insights derived from data to put the right employees in the right jobs and creating models that calculate the optimal number of staff members to deal with customers at the front desk and other service points. Today the company uses analytics to hold itself accountable for the things that matter most to its staff, knowing that happier and healthier employees create better-satisfied guests.