Leveraging the capabilities of Generative AI (Gen AI) holds the potential for impactful change across various sectors, including HR and its related departments.
Before we dive in, here are three key reminders for the start of your journey with Gen AI:
- Gen AI is a tool for enhancement, not a workforce replacement.
- Ethical use relies on human oversight to prevent bias and discrimination.
- Safeguarding and inclusion should form the foundation of all Gen AI implementations.
Now, let's start by exploring what generative AI is, examining the venture capital (VC) funding landscape for artificial intelligence (AI) startups, and then uncovering emerging trends in the Gen AI market. We will also delve into why business owners are increasingly drawn to its applications and address pivotal concerns, particularly unmonitored bias against marginalized groups.
Let's Start with the Basics: What Is AI vs Gen AI?
The key distinction between AI and Gen AI is their functions and applications:
- AI (Artificial Intelligence): Traditional AI, often referred to as Narrow or Weak AI, is primarily focused on intelligently performing specific tasks. It involves systems designed to respond to a particular set of inputs, learning from data to make decisions or predictions based on that data. (Contributor: Bernard Marr, Forbes)
- Generative AI: Utilizes existing data to create new, realistic content across various domains, including images, video, music, speech, text, software code, and product designs, while retaining the characteristics of the training data. (Source: Gartner)
Let's zoom out to broaden our perspective and explore the foundational gaps that contribute to AI bias in the first place. This journey begins with the lack of diversity in both the startup funding stage and the pool of developers and software engineers that are hired.
VC Backing for AI Startups: Confronting Funding Disparities
Numerous AI startups are competing for a piece of the Gen AI market, frequently depending on venture capital funding to fast-track their expansion. This financial support significantly influences the development and trajectory of this technology.
Investors are thrilled by the remarkable potential AI holds: major tech companies have directed significant funding toward a handful of chosen firms, and venture capitalists have actively sought out startups operating at the AI-industry crossroads. Kevin Dowd, a writer at Carta specializing in the private markets, notes, "Much of the excitement and anticipation surrounding the recent AI surge stems from the vast opportunities on the horizon."
Generative AI startups attracted a substantial $4.5 billion in investments in 2022, as revealed in PitchBook's latest vertical snapshot. This contrasts with the figures cited by Forbes contributor Cindy Gordon in August 2023. Gordon reports that, according to Crunchbase, "funding for AI-related startups surpassed $23B in 2023," and notes that "...over a quarter of the total investments in U.S.-based startups this year have been directed toward AI companies, more than double the percentage of the previous year."
While there are substantial investment opportunities in the AI startup realm, concerns around the allocation of VC funding remain a pressing matter. The discourse was further fueled by the controversy surrounding Mistral AI, a four-week-old startup co-founded by three white male alumni of Google's DeepMind and Meta, which secured $113 million in a seed round, valuing it at $260 million. This scenario spotlights the need to address funding disparities within the AI startup ecosystem, especially the gender imbalance.
The Alan Turing Institute's report, "Rebalancing Innovation: Women, AI, and Venture Capital in the UK," highlights a significant gender disparity in AI funding. Here are some key findings:
- Average capital raised by female-founded AI companies (2012-2022): £1.3 million ($1.6 million).
- All-male founder teams raised a significantly higher amount: £8.6 million ($10.5 million) over the same period.
- Startups with all-female founder teams secured just 2.1% of all funding deals within the same timeframe.
- In contrast, AI companies founded by men dominated with 79.6% of all deals.
- The total capital invested in the sector overwhelmingly favored companies with male founders, accounting for 79.3% (£55.1 billion/$67.3 billion).
Investing in AI, especially in the field of Gen AI, necessitates a thoughtful approach to collaboration from the very beginning. Entrepreneurs and investors must prioritize inclusivity in their partnerships from the outset to ensure that these developments are ethical, inclusive, and considerate of potential biases.
A White, Male-Dominated AI Workforce: Unpacking the Gaps
A more proactive approach is required to challenge homogeneity in the AI and data science workforce, actively advocating for increased gender, ethnic, socioeconomic class, and disability diversity among AI developers. In the book 'How to Talk to Robots: A Girls' Guide to a Future Dominated by AI,' author Tabitha Goldstaub emphasizes the interconnectedness of gender bias with other issues like race, class, sexuality, and ability, stating that "gender bias doesn't occur in isolation."
Leonardo Mattiazzi, a contributor to Entrepreneur, points out, “Tech has primarily pulled candidates from the same finite talent pool for decades” and adds that “62% of all tech workers are white, and 75% are male”. Mattiazzi raises the important question: “What if we expanded our talent pools to include a more diverse range of candidates, such as women, people of color, global workers, [disabled people], and formerly incarcerated individuals?”
The lack of diversity, coupled with the extra hurdles women face in advancing or retaining their roles within the AI workforce, creates a notable gap that directly shapes the machine learning stage of AI development. When models are built by a non-diverse workforce, they can magnify biases already present in our broader society, intensifying pre-existing inequalities and solidifying stereotypes through the model's socialization process. As Kay Firth-Butterfield, Head of AI & Machine Learning and Member of the Executive Committee at the World Economic Forum, emphasizes, "To achieve genuine diversity in AI, we must welcome individuals into the field who offer distinct perspectives and ways of thinking."
Key takeaways from Young, E., Wajcman, J., and Sprejer, L.'s 2021 report, "Where are the Women? Mapping the Gender Job Gap in AI," published by The Alan Turing Institute:
- The AI and data science fields exhibit a significant gender imbalance.
- Globally, more than three-quarters (78%) of professionals in these fields are male.
- Less than a quarter (22%) of the workforce in AI and data science is composed of women (World Economic Forum, 2018).
- Women in data and AI are underrepresented in technical industries like Technology/IT but overrepresented in less technical sectors such as Healthcare.
- Underrepresentation of women in C-suite positions is more pronounced in data and AI roles within the tech sector.
- Higher job turnover and attrition rates among women in AI and data science within the tech sector.
AI industry leaders must prevent the perpetuation of societal bias in AI systems by prioritizing inclusion in their sourcing and recruitment during development and engineering.
Mirroring Human Error in Generative AI: Underlying Causes
“Algorithms are opinions embedded in code,” as stated by American mathematician and data scientist Cathy O'Neil in her TED Talk in 2017.
Gen AI models have been scrutinized for their potential to perpetuate biases, particularly those that affect marginalized groups. These biases can manifest in various ways, including in automated content creation, image synthesis, and natural language processing. Addressing this issue is crucial for the responsible and ethical development of Gen AI.
In the book 'A Girls' Guide to a Future Dominated by AI,' author Tabitha Goldstaub outlines five causes of AI risk; here's an overview:
- Cause 1 - “Biased Data Sets”: The common perception is that AI remains impartial because its foundation is rooted in mathematics, where numbers are considered objective and unbiased. But what if the very data and instructions we feed into AI systems are tainted with biases? Goldstaub raises a profound point: our ingrained biases can inadvertently seep into the systems we create and program.
- Cause 2 - “Lack Of Diversity In The Workforce”: Goldstaub echoes that in the USA and UK, the tech industry has operated with a longstanding emphasis on "white, male-focused programming," which has gone unchallenged for decades, and today's AI outputs simply reflect that ongoing status quo.
- Cause 3 - “AI Is A Black Box”: This phrase points to the "hidden" or concealed nature of AI processes, which limits data scientists' access to what is beneath an AI system's interpretations of the world. It hinders their ability to scrutinize whether the data used to reach a specific conclusion has been "influenced by inequality, bias, or discrimination."
- Cause 4 - “Lack Of Accountability”: AI's autonomous nature makes it difficult to understand the reasons behind its actions. As a result, responsibility has often been shifted, with technologists occasionally blaming the technology itself when AI systems cause harm. Goldstaub emphasizes that laws need to change to hold management and decision-makers accountable.
- Cause 5 - “Digital Divide”: Goldstaub explains that "poverty and age are predictors of access to technologies, known as the digital divide." The lack of diversity in the AI workforce also affects who can access AI technologies, impacting the data from excluded groups and hindering the development of products for marginalized communities. This perpetuates existing injustices and widens the gap between the wealthy and excluded populations.
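Cause 1 above can be made concrete with a toy sketch. Everything below is hypothetical: an invented, deliberately skewed set of historical hiring records and a naive frequency-based "screener" standing in for model training. Real systems are far more complex, but the failure mode is the same: the model faithfully reproduces whatever imbalance its training data contains.

```python
from collections import Counter

# Hypothetical historical hiring records: (group, hired?) pairs.
# The data is skewed: group A was hired far more often than group B,
# for reasons unrelated to actual qualifications.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 20 + [("B", False)] * 80

def train(records):
    """'Learn' the historical hire rate per group -- a stand-in for model training."""
    hires, totals = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired  # True counts as 1, False as 0
    return {g: hires[g] / totals[g] for g in totals}

def screen(model, group, threshold=0.5):
    """Recommend a candidate only if their group's historical hire rate clears the bar."""
    return model[group] >= threshold

model = train(history)
print(model)               # {'A': 0.8, 'B': 0.2}
print(screen(model, "A"))  # True  -- bias in, bias out
print(screen(model, "B"))  # False -- qualified candidates filtered by group alone
```

The "math" here is objective in the narrow sense that the arithmetic is correct, yet the recommendations are discriminatory, because the inputs encoded a biased history. This is exactly why human oversight of training data matters.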
You might be wondering, "How can we reduce these risks to drive meaningful change?"
Tips on Mitigating AI Risks
Here’s an overview from a Biden-Harris Administration fact sheet outlining voluntary commitments by leading AI companies to manage the risks posed by AI. These commitments, which emphasize safety, security, and trust, represent a crucial stride in the pursuit of responsible AI development.
Ensure Product Safety:
- Internal and external security testing of AI systems pre-release.
- Share risk management information with industry, governments, civil society, and academia.
Prioritize Security:
- Invest in cybersecurity and insider threat safeguards.
- Enable third-party discovery and reporting of AI system vulnerabilities.
Build Trust:
- Develop technical mechanisms for user awareness of AI-generated content.
- Publicly report AI system capabilities, limitations, and appropriate/inappropriate use.
- Prioritize research on societal risks, addressing bias, discrimination, and privacy protection.
Tackle Societal Challenges:
- Develop and deploy advanced AI systems to address critical societal issues.
AI technologies offer immense potential while also requiring vigilant risk management to foster responsible development and deployment across various applications. A critical question all teams should ask themselves is, "Who am I trying to serve in society, and are they represented internally?"