With the right use of technology, companies can unlock the potential of their people, opening up more opportunities and pre-empting a kind of "digital determinism": the idea that technology will dictate social structures, cultural values and individual experience. Sixty-eight percent of business leaders told us that they have a collective responsibility to address the ethical and societal challenge of AI unintentionally manipulating people's behavior and choices. To uphold this responsibility, companies need to get creative.

Unlocking opportunities

For all the remarkable applications and benefits it brings, technology has its challenges. In the workplace, it is often used to screen people out of jobs against a narrow list of skills, experiences and education, limiting opportunities for those without a gilt-edged resumé or who want to try something new.

While crunching numbers on skills and interests can be beneficial, such scrutiny has left many employees (59 percent of those who participated in our Decoding Organizational DNA survey) concerned that employers will use workforce data to turn them into commodities—an undifferentiated mass.

Interestingly, a new body of research has found that experience and education aren't especially predictive of performance, and the half-life of skills is shrinking so fast that screening people on specific skills isn't very useful either.

The good news is that leaders can now use intelligent technology to mine far more accurate predictors of performance, identify hidden skills and match people to jobs they never imagined they could do. They can collect data on people's core capabilities, such as the ability to learn, analyze or collaborate.

Used creatively, technology can help identify latent and adjacent skills, opening up whole new horizons for people. Collecting data that reflects people’s preferences, needs and desires is one way to factor humanity into the math.
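To make this concrete, here is a minimal sketch of capability-based matching, assuming each person and each role is scored on the same small set of capability dimensions. The dimensions, names and scores below are invented for illustration; they are not drawn from the survey or from any particular product.

```python
import numpy as np

# Hypothetical capability dimensions (illustrative only).
CAPABILITIES = ["learning", "analysis", "collaboration", "communication"]

# Each person and each role is scored 0-1 on the same dimensions.
people = {
    "avery":  np.array([0.9, 0.4, 0.8, 0.7]),
    "jordan": np.array([0.5, 0.9, 0.3, 0.6]),
}
roles = {
    "data_analyst":     np.array([0.6, 0.9, 0.4, 0.5]),
    "customer_success": np.array([0.7, 0.3, 0.9, 0.9]),
}

def fit(person: np.ndarray, role: np.ndarray) -> float:
    """Cosine similarity between capability vectors, ignoring job titles."""
    return float(person @ role / (np.linalg.norm(person) * np.linalg.norm(role)))

# Rank roles for each person by capability fit, not resume keywords.
for name, profile in people.items():
    ranked = sorted(roles, key=lambda r: fit(profile, roles[r]), reverse=True)
    print(name, "->", ranked)
```

Ranking by capability fit rather than by resumé keywords is what surfaces the adjacent roles a title-based screen would never suggest.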

Actively reduce biases

While it may seem like computers are thinking for themselves, the fact remains that algorithms are written by people, and those people can be biased, whether they are conscious of it or not. For example, one might assume that removing the gender and race identifiers from someone's data would eliminate bias from an algorithm. But this is not necessarily true: the data itself can reflect a skewed talent pool or a bias already predominant in the workplace or in society.
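A small, fully synthetic sketch shows why. Suppose historical hiring decisions were biased against one group, and a seemingly neutral feature (here, a hypothetical "campus" flag) correlates with group membership. A model trained without the protected attribute still reproduces the gap:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fully synthetic illustration: historical hiring labels were biased
# against group B, and a hypothetical "campus" flag is a proxy that
# correlates strongly with group membership.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                           # 0 = A, 1 = B; never shown to the model
campus = np.clip(group + (rng.random(n) < 0.2), 0, 1)   # proxy for group
skill = rng.normal(0.0, 1.0, n)

# Biased historical label: identical skill, but group A was hired more often.
hired = (skill + (group == 0) + rng.normal(0.0, 1.0, n) > 1.0).astype(int)

# Train on skill and the proxy only; the protected attribute is dropped.
X = np.column_stack([skill, campus])
pred = LogisticRegression().fit(X, hired).predict(X)

for g, name in [(0, "A"), (1, "B")]:
    print(f"group {name}: predicted hire rate = {pred[group == g].mean():.2f}")
# The gap persists: the model recovers group membership through the proxy.
```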

There are solutions, however, and used creatively, technology provides ample opportunity to reduce bias. For example, the artificial intelligence (AI) startup Pymetrics has developed Audit AI, a tool that detects bias in algorithms. Originally built to root out bias in Pymetrics' own algorithms, which are used to determine whether a candidate is a good fit for a job, the tool was recently open-sourced to help others audit the output of virtually any machine learning technique. It determines whether a specific statistic or trait fed into an algorithm is favored or disadvantaged at a statistically significant, systematic rate, leading to adverse impact on people underrepresented in the dataset.
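To illustrate the kind of test such an audit performs, here is a minimal sketch combining the four-fifths rule with a two-proportion z-test. The function, thresholds and numbers are illustrative only; this is not Audit AI's actual interface.

```python
import numpy as np
from scipy.stats import norm

def adverse_impact_check(passed_a: int, total_a: int,
                         passed_b: int, total_b: int) -> None:
    """Compare pass-through rates of two groups, in the spirit of
    tools like Audit AI (illustrative sketch, not its real API)."""
    p_a, p_b = passed_a / total_a, passed_b / total_b
    # Four-fifths rule: the lower rate should be at least 80% of the higher.
    ratio = min(p_a, p_b) / max(p_a, p_b)
    # Two-proportion z-test for statistical significance of the gap.
    p_pool = (passed_a + passed_b) / (total_a + total_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * norm.sf(abs(z))
    print(f"rates: {p_a:.2f} vs {p_b:.2f}, ratio {ratio:.2f}, p = {p_value:.4f}")
    if ratio < 0.8 and p_value < 0.05:
        print("flag: statistically significant adverse impact")

# Hypothetical screening outcomes for two demographic groups.
adverse_impact_check(passed_a=480, total_a=1000, passed_b=330, total_b=900)
```

A gap is flagged only when it is both large (the ratio test) and unlikely to be noise (the significance test), which keeps small-sample fluctuations from triggering false alarms.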

Adopting this kind of technology is an important component of a transparent, trust-building approach to data-mining. And the good news is that more companies are developing bias filters to help.

Grow people, don’t penalize them

Tracking the productivity of employees is nothing new; think of the stopwatches used in the early 20th century to determine the best way to perform a job. But advances in technology take this to radical new heights, creating a kind of micromanagement that can make employees feel their every move is being watched and that any misstep could be penalized.

In some cases, companies now track and share real-time results on scorecards or in a live gaming format, and use the results as grounds for dismissing poorly performing workers. Unfortunately, this approach can raise worker stress, lower job satisfaction and increase turnover. It’s an ill-advised practice.

The good news is that companies that use new sources of workplace data to help employees learn, grow or make their jobs easier can outperform those that use the data primarily to monitor and penalize individuals. And employees are optimistic:

  • 81 percent of those surveyed said they believe new workforce technologies will improve their learning, growth and career development.
  • More than twice as many employees are positive about the impact of new technologies and sources of workplace data as are negative.

It’s clear that when employees feel empowered and represented, they are more effective and happier in their jobs. By approaching data-mining with fairness and a focus on employees, companies can grow their people’s skills and, in doing so, empower them.

Using data to help employees improve, rather than to punish them, raises optimism and overall performance. With the right motives, tracking can benefit everyone, positioning data gathering as a positive practice rather than a negative one.

To create an environment of trust, employers should communicate clearly with employees about their data-mining practices, with transparent processes and a shared understanding of how the data will be used.

If you’re interested in learning more about organizational DNA and how data can be used to elevate an organization’s productivity while increasing employee satisfaction and safety, I encourage you to read the Accenture report “Decoding Organizational DNA: Trust, Data, and Unlocking Value in the Digital Workplace.”
