Why business leaders must act now to address bias in AI


A key factor contributing to AI bias is the lack of diverse perspectives in its development. Currently, only 22% of AI professionals are women, and representation of other marginalized groups is even lower. This homogeneity in AI development teams means that potential biases often go unnoticed until systems are deployed in the real world.

“The train has left the station,” says AI researcher Nathalie Salles-Olivier, who studies bias in HR systems. “It’s now a matter of how we correct it and regain agency and power.” This sentiment underscores the urgency of the situation: the longer we wait to address these biases, the more deeply embedded they become in our AI systems.

The representation problem

Recent research reveals a troubling reality: AI systems often perpetuate and amplify existing societal biases rather than eliminate them. According to Salles-Olivier, “61% of performance feedback reflects the evaluator more than the employee.”

When this already biased human data is used to train AI systems, the result is a compounding effect that can create deep-rooted systemic biases in automated decision-making processes.

The business implications of biased AI systems extend far beyond ethical concerns, creating tangible impacts on a company’s bottom line. When AI systems perpetuate bias in recruitment, organizations miss out on valuable talent that could drive innovation and growth.

These systems tend to reinforce existing patterns rather than identify novel approaches, stifling creative problem-solving and limiting new perspectives. Additionally, biased AI exposes companies to legal vulnerabilities and reputational damage, while simultaneously restricting their market reach by failing to understand and connect with diverse customer segments.

Four strategies to address AI bias

To effectively address AI bias, companies must implement a comprehensive strategy that encompasses four key areas.

1. Diversify AI development teams

Diversifying AI development teams should extend beyond traditional hiring practices. As Salles-Olivier points out, “Women tend to not engage in roles where they don’t feel like they have all the competencies that are necessary.” To counter this, companies need to create pathways for non-technical experts to contribute their perspectives. “I wanted to prove that people like me who’ve never coded before could get in and influence the direction that AI will take,” says Salles-Olivier, who built AI agents without any technical background.

2. Test and audit AI systems

Organizations must implement robust testing frameworks with comprehensive bias testing protocols before deploying AI systems. This testing should be followed by regular audits of AI decisions to identify potential discriminatory patterns. Including diverse stakeholders in the testing process helps catch bias issues that might be overlooked by homogeneous testing teams and ensures the system works effectively for all intended users.
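To make this concrete, here is a minimal sketch of one way such an audit could look in code: it compares selection rates across groups in a log of automated decisions and flags any group that falls below the widely cited four-fifths (80%) benchmark. The group labels, sample data, and function names are hypothetical illustrations, not a production bias-testing framework.

from collections import defaultdict

def selection_rates(decisions):
    # decisions is a list of (group, outcome) pairs, where outcome is True
    # for a favourable result, e.g. a candidate shortlisted for interview.
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    # Flag groups whose selection rate is below `threshold` times the
    # highest group's rate: the common "four-fifths" rule of thumb.
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical audit log from an AI screening tool
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]
print(selection_rates(log))          # roughly {'group_a': 0.67, 'group_b': 0.33}
print(disparate_impact_flags(log))   # {'group_a': False, 'group_b': True}

Running a check like this on every release, and on regular samples of live decisions, turns “regular audits” from a policy statement into a repeatable engineering task.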

3. Focus on quality data

The old programming adage “garbage in, garbage out” is particularly relevant for AI. Data quality forms the foundation of unbiased AI systems. Organizations must thoroughly audit their training data for historical biases that could be perpetuated by AI systems. This involves actively collecting more diverse and representative data sets that reflect the full spectrum of users and use cases. In cases where natural data collection might be insufficient, companies should consider using synthetic data generation techniques to balance underrepresented groups and ensure AI models learn from a more equitable data distribution.
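As a simple illustration of the rebalancing step, the sketch below naively oversamples under-represented groups by duplicating existing records until every group appears as often as the largest one. It stands in for more sophisticated synthetic data generation; the record fields and group labels are hypothetical.

import random
from collections import Counter

def oversample_minorities(records, group_key):
    # Duplicate examples from under-represented groups at random until
    # every group appears as often as the largest one (naive rebalancing).
    counts = Counter(r[group_key] for r in records)
    target = max(counts.values())
    balanced = list(records)
    for group, count in counts.items():
        pool = [r for r in records if r[group_key] == group]
        balanced.extend(random.choices(pool, k=target - count))
    return balanced

# Hypothetical, heavily skewed training records for a hiring model
data = [{"group": "a", "hired": 1}] * 80 + [{"group": "b", "hired": 1}] * 20
balanced = oversample_minorities(data, "group")
print(Counter(r["group"] for r in balanced))  # Counter({'a': 80, 'b': 80})

Duplicating records is the crudest option and can overfit a model to a handful of minority examples; in practice teams reach for techniques such as SMOTE or generative models, but the goal is the same: ensure the model learns from a more equitable distribution.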


4. Maintain human oversight

Finally, while AI can enhance decision-making, human judgment remains crucial. Organizations should implement “human-in-the-loop” systems for critical decisions, ensuring that AI recommendations are reviewed and validated by human experts. Domain experts must have the authority to override AI recommendations when necessary, based on their experience and understanding of nuanced factors that AI might miss. Regular review and adjustment of AI system parameters helps ensure the technology remains aligned with organizational values and goals while preventing the emergence of unintended biases.
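A human-in-the-loop arrangement can be as simple as a routing rule. The sketch below, with hypothetical field names and thresholds, auto-applies only high-confidence favourable recommendations and queues everything else, including all adverse decisions, for a human reviewer who can override the model.

from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    decision: str      # e.g. "advance" or "reject"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(rec, review_queue, confidence_floor=0.9):
    # Only high-confidence, favourable recommendations are applied
    # automatically; everything else waits for human judgment.
    if rec.confidence < confidence_floor or rec.decision == "reject":
        review_queue.append(rec)
        return "pending_human_review"
    return rec.decision

queue = []
print(route(Recommendation("c-101", "advance", 0.95), queue))  # advance
print(route(Recommendation("c-102", "reject", 0.97), queue))   # pending_human_review
print(len(queue))  # 1

The specific threshold and the rule that every rejection goes to a person are policy choices for each organization; what matters is that the override path exists and is routinely exercised.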

How to shape future outcomes

The future of AI will be shaped by the actions we take today. The challenge of addressing AI bias might seem daunting, but the cost of inaction is far greater. As AI systems become more integrated into business operations, the biases they contain will have increasingly significant impacts on business outcomes and society at large.

By working actively to reduce bias in their AI systems, businesses can help ensure that AI becomes a force for positive change rather than a perpetuator of existing inequities. Business leaders must:

  1. Assess current AI systems for potential biases
  2. Develop clear guidelines for ethical AI development
  3. Invest in diverse talent and perspectives
  4. Create accountability mechanisms for AI decisions


This article was originally published on forbes.com

Samantha Walravens is the co-author of GEEK GIRL RISING: Inside the Sisterhood Shaking Up Tech. She is a San Francisco-based writer who shares the stories of women entrepreneurs, engineers and executives challenging the status quo and paving the way for future generations of women leaders. Walravens teaches classes on diversity and innovation in the technology world at Lehigh University in the US and is the author and editor of the New York Times-acclaimed anthology TORN: True Stories of Kids, Career & the Conflict of Modern Motherhood.
