CAMBRIDGE, Mass. — Technology companies and researchers working towards diversity, equity and inclusion (DEI) in artificial intelligence (AI) products now face shifting political and regulatory challenges that threaten to alter the trajectory of their efforts. The evolving political climate in the United States has resulted in heightened scrutiny of AI fairness initiatives, with some policymakers recasting concerns over algorithmic bias as allegations of so-called “woke AI.”

Several years ago, Google enlisted the expertise of Ellis Monk, a Harvard University sociologist, to strengthen the inclusivity of its AI technologies, particularly in computer vision, the branch of AI that enables machines to interpret and understand images and one with a documented history of replicating societal biases, especially those affecting people of colour. Monk, whose academic focus includes colourism, or discrimination based on skin tone, helped Google adopt the Monk Skin Tone Scale, a ten-point standard that replaced the older Fitzpatrick scale, originally created for classifying white dermatology patients. His scale now supports the improved depiction of a broader range of skin colours across multiple products, including camera phones and AI image generators. “Consumers definitely had a huge positive response to the changes,” said Monk.
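In practice, a scale like Monk’s is typically used to disaggregate a model’s evaluation metrics by skin tone, so developers can see whether accuracy degrades for darker-skinned subjects before a product ships. The Python sketch below is a minimal, hypothetical illustration of that auditing pattern; the function name, record format, and sample figures are invented for demonstration and do not reflect Google’s internal tooling.

```python
# Hypothetical illustration: disaggregating an image classifier's accuracy
# across the ten Monk Skin Tone (MST) buckets. All data here is fabricated
# for demonstration; this is not Google's implementation or real product data.
from collections import defaultdict

def accuracy_by_skin_tone(records):
    """records: iterable of (mst_bucket, predicted, actual) tuples,
    where mst_bucket is 1-10 on the Monk Skin Tone Scale."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for bucket, predicted, actual in records:
        totals[bucket] += 1
        hits[bucket] += int(predicted == actual)
    # Per-bucket accuracy, sorted from lightest (1) to darkest (10) bucket.
    return {b: hits[b] / totals[b] for b in sorted(totals)}

# Toy sample in which the model performs worse on a darker skin tone bucket.
sample = [(1, "face", "face")] * 95 + [(1, "none", "face")] * 5 \
       + [(9, "face", "face")] * 80 + [(9, "none", "face")] * 20

for bucket, acc in accuracy_by_skin_tone(sample).items():
    print(f"MST bucket {bucket}: accuracy {acc:.0%}")
```

In a real audit, a wide accuracy gap between light and dark buckets, as in the toy output above, would flag the model for further data collection or retraining.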

Despite notable progress like this, recent developments in Washington reflect a changing mood that has raised concerns among DEI experts about the future of such work. Following the retreat of many tech firms from workplace DEI initiatives, government scrutiny of AI fairness work has intensified. The House Judiciary Committee has issued subpoenas to Amazon, Google, Meta, Microsoft, OpenAI, and other companies to investigate whether AI products censor lawful speech, a line of inquiry led by Republican lawmakers who frame equitable-AI efforts as politically motivated interference. According to the Associated Press, the U.S. Commerce Department’s standards-setting branch, the National Institute of Standards and Technology, has also removed references to AI fairness and “responsible AI” from its appeal for collaboration with outside researchers, instead urging work to “reduce ideological bias” in AI with an emphasis on “human flourishing and economic competitiveness.”

Monk is sceptical that DEI-aligned AI programmes will endure under these conditions. While his own skin tone scale has been widely integrated and is unlikely to be undone, he acknowledged that shifting political priorities and pressure for rapid product development could mean reduced funding and fewer such projects. “Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune,” Monk said. “But could future funding for those kinds of projects be lowered? Absolutely, when the political mood shifts and when there’s a lot of pressure to get to market very quickly.”

The backdrop to these tensions includes studies documenting significant disparities in AI performance. Self-driving car technology, for instance, has been found to detect darker-skinned pedestrians less reliably, raising their risk of harm. AI image generators often produce stereotypical, less diverse representations, such as overwhelmingly depicting surgeons as white men. Facial recognition software has been criticised for misidentifying people of Asian descent and for contributing to the wrongful arrests of Black men. Notably, Google’s own Photos app once categorised images of Black individuals as “gorillas,” a widely reported incident from a decade ago. These entrenched biases have prompted calls for reform and heightened scrutiny from governments and civil rights advocates alike.

The tech sector’s focus on AI fairness deepened after President Joe Biden took office, and the launch of OpenAI’s ChatGPT in late 2022 spurred a commercial surge in AI applications. Google’s own AI chatbot, Gemini, launched with safeguards against bias, but its image generation tool drew criticism for overcorrecting: it depicted people of colour and women in historically inaccurate contexts, such as portraying American founding fathers as Black, Asian, and Native American men. Google apologised and temporarily withdrew the feature, but the incident became a touchstone for conservative criticism of what was labelled “woke AI.”

At a February AI summit in Paris, Vice President JD Vance, speaking before an audience that included Google CEO Sundar Pichai, condemned the propagation of “downright ahistorical social agendas through AI,” in an apparent reference to the Gemini image generator episode. Vance said he intended to ensure that “AI systems developed in America are free from ideological bias and never restrict our citizens’ right to free speech.”

That political critique contrasts with the perspective of researchers like Alondra Nelson, a former Biden administration science adviser who served as acting director of the White House Office of Science and Technology Policy. Nelson argues that concerns about “ideological bias” in AI are, at root, an acknowledgment of the longstanding problem of algorithmic bias, which affects many facets of life, including housing, healthcare, and finance. She described the distinction between “algorithmic discrimination” and “ideological bias” as largely semantic, but doubted that the two camps would find common ground in the current climate. “I think in this political space, unfortunately, that is quite unlikely,” she said. “Problems that have been differently named — algorithmic discrimination or algorithmic bias on the one hand, and ideological bias on the other — will be regrettably seen as two different problems.”

As AI technologies advance and become increasingly embedded in daily life worldwide, the unfolding debate in Washington highlights the intersection of technological innovation, societal equity, and political ideology. The prevailing regulatory and funding environment will likely determine which AI fairness initiatives continue to evolve within the US tech industry and beyond.

Source: Noah Wire Services