Advancements in technologies, including sensors, mobile devices, wireless communications, data analytics and biometrics, are rapidly expanding monitoring capabilities and reducing the cost of surveillance, prompting more employers to use these tools.
In 2015, about 30 percent of large employers were monitoring employees in nontraditional ways, such as analyzing e-mail text, logging computer usage or tracking employee movements, says Brian Kropp, group vice president of HR practice for Gartner, a research and advisory firm. By 2018, that number had jumped to 46 percent, and Gartner projects it will reach well over 50 percent this year.
Amazon admitted this week that it experimented with using machine learning to build a recruitment tool. The trouble is, it didn't produce good results, and the project was later abandoned.
According to Reuters, Amazon engineers found that besides churning out totally unsuitable candidates, the so-called AI project showed a bias against women.
To Oxford University researcher Dr Sandra Wachter, the news that an artificially intelligent system had taught itself to discriminate against women was nothing new.
Unconscious biases are created and reinforced by our environments and experiences. Our mind is constantly processing information, oftentimes without our conscious awareness. When we are moving fast or lack all the data, our unconscious biases fill in the gaps, influencing everything from product decisions to our interactions with coworkers. There is a growing body of research – led by scientists at Google – surrounding unconscious bias and how we can prevent it from negatively impacting our decision making.
On the day I met Brett Ostrum, in a conference room in Redmond, Wash., he was wearing a black leather jacket and a neat goatee, and his laptop was covered with stickers that made it appear you could glimpse its electronic innards. That was logical enough, because those circuits were his responsibility: He was the corporate vice president at Microsoft in charge of the company’s computing devices, most notably Xbox and the Surface line of laptops and tablets.
It was early 2018, and things were going pretty well for him. Despite Microsoft’s lineage as a software company, and as a brand not exactly synonymous with good design, it was making the most of its late start in the hardware business. Mr. Ostrum and his team were winning market share and high marks from critics.
But he saw a problem on the horizon. It came in the form of extensive surveys Microsoft used to monitor employees’ attitudes. Mr. Ostrum’s business unit scored average or above average on most measures — except one. Employees reported being much less satisfied with their work-life balance than their counterparts elsewhere at the company.
Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.
“Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”
At a time when computer science departments around the country are grappling with how to turn out graduates who understand ethics as well as algorithms, Harvard is taking a novel approach.
Organizational Network Analysis (ONA) is a set of scientific methods and theories for understanding interactions within an organization. It helps executives and managers intervene at critical times, increase performance, and reduce costs.
There’s increasing pressure on executives to drive sustained, long-term growth. Yet they lack the information they need to make informed business decisions and successfully initiate change. As organizations restructure departments to have fewer hierarchical levels, work increasingly occurs through social networks rather than through prescribed reporting structures. Research shows that employees look to their networks to find information and to solve problems. Communication no longer flows solely from senior management to individual contributors – information moves through social networks, between colleagues and different teams. Organizations can analyze social networks to assess how information flows between teams and to intervene at critical times in order to improve how work gets done.
– The benefits of supporting organizational networks
– How network analysis can impact company performance
– How to interpret network graphs
– Business applications of ONA for human resources, business processes, and corporate real estate decisions
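To make the kind of computation behind ONA concrete, here is a minimal sketch of degree centrality over a communication network. The names and edges are invented for illustration; real ONA tools work on email, chat, or calendar metadata at much larger scale.

```python
from collections import defaultdict

# Hypothetical communication edges: who talks with whom (names invented).
edges = [
    ("ana", "ben"), ("ana", "caro"), ("ben", "caro"),
    ("caro", "dev"), ("dev", "eli"), ("caro", "eli"),
]

# Build an undirected adjacency list.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

# Degree centrality: the fraction of other people each person is connected to.
n = len(graph)
centrality = {p: len(nbrs) / (n - 1) for p, nbrs in graph.items()}

# "caro" bridges the two halves of this tiny network, so she scores highest.
most_central = max(centrality, key=centrality.get)
print(most_central)  # caro
```

Measures like this are how an analyst spots the informal hubs and brokers that a formal org chart never shows.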
ON MARCH 18, 2018, at around 10 P.M., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car. Although there was a human operator behind the wheel, an autonomous system—artificial intelligence—was in full control. This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions. What moral obligations did the system’s programmers have to prevent their creation from taking a human life? And who was responsible for Herzberg’s death? The person in the driver’s seat? The company testing the car’s capabilities? The designers of the AI system, or even the manufacturers of its onboard sensory equipment?
“Artificial intelligence” refers to systems that can be designed to take cues from their environment and, based on those inputs, proceed to solve problems, assess risks, make predictions, and take actions. In the era predating powerful computers and big data, such systems were programmed by humans and followed rules of human invention, but advances in technology have led to the development of new approaches. One of these is machine learning, now the most active area of AI, in which statistical methods allow a system to “learn” from data, and make decisions, without being explicitly programmed. Such systems pair an algorithm, or series of steps for solving a problem, with a knowledge base or stream—the information that the algorithm uses to construct a model of the world.
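The phrase “learn from data without being explicitly programmed” can be illustrated with the simplest possible model: instead of hand-coding a rule, we fit a line to example points and use the fitted parameters to predict new ones. The numbers below are invented for illustration.

```python
# Toy data roughly following y = 2x (values invented for illustration).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# Closed-form least squares for slope and intercept: no rule was written
# by hand; the parameters are estimated from the data.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The "model" is just these two learned numbers; prediction applies them.
def predict(x):
    return slope * x + intercept

print(round(predict(6.0), 2))  # 11.99
```

Modern machine learning systems fit millions of parameters rather than two, but the shape of the process is the same: an algorithm plus data yields a model of the world.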
Ethical concerns about these advances focus at one extreme on the use of AI in deadly military drones, or on the risk that AI could take down global financial systems. Closer to home, AI has spurred anxiety about unemployment, as autonomous systems threaten to replace millions of truck drivers, and make Lyft and Uber obsolete. And beyond these larger social and economic considerations, data scientists have real concerns about bias, about ethical implementations of the technology, and about the nature of interactions between AI systems and humans if these systems are to be deployed properly and fairly in even the most mundane applications.
Consider a prosaic-seeming social change: machines are already being given the power to make life-altering, everyday decisions about people. Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend.
But such applications raise troubling ethical issues because AI systems can reinforce what they have learned from real-world data, even amplifying familiar risks, such as racial or gender bias. Systems can also make errors of judgment when confronted with unfamiliar scenarios. And because many such systems are “black boxes,” the reasons for their decisions are not easily accessed or understood by humans—and therefore difficult to question, or probe.
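How a model comes to reinforce what it learned from skewed data can be shown with a deliberately naive scorer. All resumes and words below are invented; the point is only that when the historical "hired" pool is skewed, words correlated with the majority group get high weights, and otherwise identical candidates score differently.

```python
from collections import Counter

# Invented toy data: past hiring decisions the model "learns" from.
hired = [
    "chess club captain", "football team lead", "chess club member",
    "robotics and chess", "football captain",
]
rejected = ["women's chess club captain", "women's robotics team"]

hired_counts = Counter(w for r in hired for w in r.split())
rejected_counts = Counter(w for r in rejected for w in r.split())

def score(resume):
    # Naive scoring: +1 for each word seen among hires, -1 among rejects.
    return sum(hired_counts[w] - rejected_counts[w] for w in resume.split())

# Identical qualifications, one gendered word of difference:
print(score("chess club captain"))          # 4
print(score("women's chess club captain"))  # 2 -- penalized for "women's"
```

No one programmed the penalty; it fell out of the training data, which is exactly the failure mode reported in the Amazon case above.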
One day this fall, Ashutosh Garg, the chief executive of a recruiting service called Eightfold.ai, turned up a résumé that piqued his interest.
It belonged to a prospective data scientist, someone who unearths patterns in data to help businesses make decisions, like how to target ads. But curiously, the résumé featured the term “data science” nowhere.
Instead, the résumé belonged to an analyst at Barclays who had done graduate work in physics at the University of California, Los Angeles. Though his profile on the social network LinkedIn indicated that he had never worked as a data scientist, Eightfold’s software flagged him as a good fit. He was similar in certain key ways, like his math and computer chops, to four actual data scientists whom Mr. Garg had instructed the software to consider as a model.
The idea is not to focus on job titles, but “what skills they have,” Mr. Garg said. “You’re really looking for people who have not done it, but can do it.”
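Matching on skills rather than titles can be sketched as a similarity computation over skill vectors. This is not Eightfold's actual method, which is proprietary; the skills and counts below are invented to show the idea of scoring candidates by how close their profile sits to known-good hires.

```python
import math

# Hypothetical skill profiles (invented numbers): each vector counts
# evidence of a skill, regardless of the candidate's job title.
skills = ["statistics", "python", "sql", "physics"]
model_data_scientist = [5, 4, 3, 1]   # composite of known-good hires
physics_analyst      = [4, 3, 2, 5]   # never held the title "data scientist"
marketing_manager    = [0, 0, 1, 0]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The physicist's skill profile is far closer to the model hire's,
# even though his resume never says "data science".
print(cosine(model_data_scientist, physics_analyst) >
      cosine(model_data_scientist, marketing_manager))  # True
```

This is the sense in which software can flag "people who have not done it, but can do it": the title is absent, but the skill geometry matches.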
And surveys are fine, says Dayna Weintraub, director of student-affairs research and assessment at Rutgers University at New Brunswick. But she also recognizes their drawbacks: poor response rates, underrepresentation of particular demographic groups, and, in certain instances, answers that lack needed candor.
And so, to assess and change student conduct in a more effective way, Weintraub and her colleagues have tried a new approach: find existing, direct, and detailed data on how Rutgers students conduct themselves, and combine them.
Leading the effort was Kevin Pitt, director of student conduct at the New Jersey university. Working alongside Weintraub, he and his team analyzed, with granular specificity, the behavior patterns of students in a variety of contexts: consuming excessive alcohol or drugs, in questionable sexual situations, and others. Pitt and his team examined student-level trends within those areas, combining a variety of previously siloed databases to sketch a more-informative picture of student life at Rutgers.
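Combining previously siloed databases, as the Rutgers team did, amounts to joining records on a shared identifier. The sketch below is hypothetical; all IDs, categories, and values are invented, and real student-records work involves far more care around privacy and data governance.

```python
# Hypothetical siloed data sources keyed by student ID (all values invented).
conduct = {"s01": ["alcohol"], "s02": ["alcohol", "drugs"]}
housing = {"s01": "on-campus", "s02": "off-campus", "s03": "on-campus"}
academics = {"s01": 3.1, "s02": 2.4, "s03": 3.8}

# Merge on the shared key to get one combined record per student.
students = {
    sid: {
        "incidents": conduct.get(sid, []),
        "housing": housing.get(sid),
        "gpa": academics.get(sid),
    }
    for sid in set(conduct) | set(housing) | set(academics)
}

print(students["s02"]["housing"])  # off-campus
```

Once the silos are joined, questions that no single database could answer, such as whether conduct incidents cluster by housing type, become simple lookups.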
An algorithm that was being tested as a recruitment tool by online giant Amazon was sexist and had to be scrapped, according to a Reuters report. The artificial intelligence system was trained on data submitted by applicants over a 10-year period, much of which, the report claimed, came from men.
Reuters was told by members of the team working on it that the system effectively taught itself that male candidates were preferable. Amazon has not responded to the claims.
Reuters spoke to five members of the team who developed the machine learning tool in 2014, none of whom wanted to be publicly named. They told Reuters that the system was intended to review job applications and give candidates a score ranging from one to five stars.
"They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those," said one of the engineers who spoke to Reuters.
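The workflow the engineer describes, score a batch of resumes and return the top five, is mechanically simple. In the sketch below the scores are random stand-ins for a model's one-to-five-star output; only the selection step is illustrated.

```python
import heapq
import random

# Stand-in scores: 100 resumes, each rated 1.0-5.0 (random placeholders
# for what the model would produce).
random.seed(0)
resumes = {f"resume_{i:03d}": random.uniform(1.0, 5.0) for i in range(100)}

# "Give you 100 resumes, it will spit out the top five":
top_five = heapq.nlargest(5, resumes, key=resumes.get)
print(len(top_five))  # 5
```

The selection is trivial; as the preceding sections show, everything that matters, and everything that went wrong, is in how the scores themselves are produced.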
In today's world, scientists in many disciplines and a growing number of journalists live and breathe data. There are many thousands of data repositories on the web, providing access to millions of datasets; and local and national governments around the world publish their data as well. To enable easy access to this data, we launched Dataset Search, so that scientists, data journalists, data geeks, or anyone else can find the data required for their work and their stories, or simply to satisfy their intellectual curiosity.
Similar to how Google Scholar works, Dataset Search lets you find datasets wherever they’re hosted, whether it’s a publisher's site, a digital library, or an author's personal web page. To create Dataset Search, we developed guidelines for dataset providers to describe their data in a way that Google (and other search engines) can better understand the content of their pages. These guidelines include salient information about datasets: who created the dataset, when it was published, how the data was collected, what the terms are for using the data, etc. We then collect and link this information, analyze where different versions of the same dataset might be, and find publications that may be describing or discussing the dataset. Our approach is based on an open standard for describing this information (schema.org) and anybody who publishes data can describe their dataset this way. We encourage dataset providers, large and small, to adopt this common standard so that all datasets are part of this robust ecosystem.
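A provider description of the kind the guidelines ask for is JSON-LD using the schema.org Dataset type. The sketch below builds such a record in Python; the dataset name, creator, and values are invented, and a real page would carry additional properties (such as distribution download links).

```python
import json

# A minimal schema.org/Dataset description (all values invented).
dataset = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "City Air Quality Readings",
    "description": "Hourly PM2.5 readings from municipal sensors.",
    "creator": {"@type": "Organization", "name": "Example City Open Data"},
    "datePublished": "2018-09-01",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# Providers embed this in their page inside
# <script type="application/ld+json"> ... </script>.
markup = json.dumps(dataset, indent=2)
print("Dataset" in markup)  # True
```

Because the vocabulary is an open standard, any search engine, not just Google, can read the same markup.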
In machine learning and deep learning we can’t do anything without data. So the people that create datasets for us to train our models are the (often under-appreciated) heroes. Some of the most useful and important datasets are those that become important “academic baselines”; that is, datasets that are widely studied by researchers and used to compare algorithmic changes. Some of these become household names (at least, among households that train models!), such as MNIST, CIFAR-10, and ImageNet.
We all owe a debt of gratitude to those kind folks who have made datasets available for the research community. So fast.ai and the AWS Public Dataset Program have teamed up to try to give back a little: we’ve made some of the most important of these datasets available in a single place, using standard formats, on reliable and fast infrastructure. For a full list and links see the fast.ai datasets page.
fast.ai uses these datasets in the Deep Learning for Coders courses, because they provide great examples of the kind of data that students are likely to encounter, and the academic literature has many examples of model results using these datasets which students can compare their work to. If you use any of these datasets in your research, please show your gratitude by citing the original paper (we’ve provided the appropriate citation link below for each), and if you use them as part of a commercial or educational project, consider adding a note of thanks and a link to the dataset.
You want to know which teams are at the forefront of analytics? Just look around at the teams still playing.
Once upon a time, there was the Oakland Athletics and a sacred tome called "Moneyball." It was about baseball teams winning with statistics. Only it wasn't about that at all. It was about market inefficiency. Then John Henry bought the Boston Red Sox, hired Bill James, made Theo Epstein his general manager, and Moneyball spread to a big market.
We're several iterations past all of that. Things move fast in technology, so fast it can even carry a tradition-based industry like baseball into the digital age. These days, every team is playing Moneyball. All of them, as in 30 for 30.
"At this point, I think everyone assumes that their counterpart is smart," Brewers general manager David Stearns said. "And everyone is doing what they can do to unearth competitive advantages." To call it Moneyball is not right, either. Michael Lewis is still turning out ground-breaking work, but to fully capture what is happening in big league front offices, circa 2018, the next inside look at analytics and baseball would need to be authored by someone like the late Stephen Hawking. It's hard to say what you'd call it. "The Singularity" has already been taken.