In recent years, efforts by technology companies to address bias and promote inclusivity in artificial intelligence (AI) products have come under intensified scrutiny amid shifting political landscapes in the United States. At the centre of these developments is the growing debate over the role of diversity, equity, and inclusion (DEI) initiatives in AI, with former and current administrations adopting contrasting stances on the subject.

Harvard sociologist Ellis Monk, a specialist in skin tone bias known for developing the Monk Skin Tone Scale, was approached by Google several years ago to help make its AI image recognition tools more inclusive. His work led to improvements in how AI interprets a broad spectrum of skin tones, addressing long-standing misrepresentation in computer vision systems. “Consumers definitely had a huge positive response to the changes,” Monk said of the updated tools, which replaced an outdated dermatological standard designed primarily for white skin.
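Monk’s scale is a ten-point reference running from the lightest to the darkest skin tones. As a rough illustration of how such a scale can be applied, the Python sketch below matches a sampled skin-tone colour to its nearest swatch. The hex values and the raw-RGB distance metric are illustrative assumptions, not Google’s published implementation; production systems typically detect skin pixels first and compare colours in a perceptual space such as CIELAB.

```python
# Minimal sketch: nearest-swatch matching against a ten-point skin tone scale.
# The hex values below are illustrative approximations of the Monk Skin Tone
# Scale swatches, NOT authoritative; verify against the published scale before
# relying on them.
import math

# Assumed swatches, lightest (1) to darkest (10).
MST_SWATCHES = {
    1: "#f6ede4", 2: "#f3e7db", 3: "#f7ead0", 4: "#eadaba", 5: "#d7bd96",
    6: "#a07e56", 7: "#825c43", 8: "#604134", 9: "#3a312a", 10: "#292420",
}

def hex_to_rgb(hex_code: str) -> tuple[int, int, int]:
    """Convert '#rrggbb' to an (r, g, b) tuple of ints."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

def nearest_mst(rgb: tuple[int, int, int]) -> int:
    """Return the scale level (1-10) whose swatch is closest in RGB space."""
    return min(
        MST_SWATCHES,
        key=lambda level: math.dist(rgb, hex_to_rgb(MST_SWATCHES[level])),
    )

if __name__ == "__main__":
    sample = (150, 110, 80)  # e.g. an averaged skin-pixel sample from a photo
    print(f"Sample {sample} maps to scale level {nearest_mst(sample)}")
```

Plain Euclidean distance in RGB is used here only for brevity; because RGB is not perceptually uniform, a real system would convert both the sample and the swatches to CIELAB (or similar) before measuring distance.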

Notwithstanding the success of such initiatives, recent political developments signal a potential slowdown in the momentum behind inclusive AI projects. Last month, the Republican-led House Judiciary Committee issued subpoenas to major AI companies, including Amazon, Google, Meta, Microsoft, and OpenAI, to investigate allegations of censorship and political bias in AI systems. The committee chair, Representative Jim Jordan, has said he wants to determine whether the previous Biden administration coerced or colluded with these companies to restrict lawful speech.

In a parallel policy shift, the U.S. Commerce Department’s standards-setting branch, the National Institute of Standards and Technology, has removed references to AI fairness and responsible AI from its collaborative research guidelines, directing researchers instead to focus on “reducing ideological bias” in the service of “human flourishing and economic competitiveness.” Michael Kratsios, director of the White House Office of Science and Technology Policy, criticised the Biden administration’s approach at a recent event in Texas, saying its AI policies “were promoting social divisions and redistribution in the name of equity.”

The backdrop to these political interventions is a well-documented history of algorithmic bias causing harm. Research has found that pedestrian-detection systems of the kind used in self-driving cars recognise people with darker skin less reliably, and that AI image generators reproduce stereotypes, such as overwhelmingly depicting surgeons as white men. Face recognition software has likewise been shown to perform unevenly across racial groups, raising concerns about misidentification and wrongful arrests. Google itself drew criticism in 2015 when its photo app mistakenly labelled images of Black people as “gorillas.”

The arrival of OpenAI’s ChatGPT in 2022 accelerated commercial AI development, intensifying competition and prompting companies such as Google to relax some of their earlier caution. The rollout of Google’s Gemini AI chatbot then exposed the difficulty of balancing representation: attempts to mitigate bias overcorrected, producing historically inaccurate images such as American founding fathers depicted with diverse ethnic appearances. The incident became a focal point for conservative backlash against what has been pejoratively termed “woke AI.”

Vice President JD Vance cited the episode at an AI summit in Paris in February, saying such incidents exemplified the pursuit of “downright ahistorical social agendas through AI.” He affirmed the Trump administration’s commitment to keeping ideological bias out of AI and safeguarding free speech rights.

Alondra Nelson, a former acting director of the White House Office of Science and Technology Policy under the Biden administration, highlighted the irony in the current scrutiny around “ideological bias,” noting that it fundamentally acknowledges concerns about algorithmic bias that researchers have sought to address for years. She commented, “Fundamentally, to say that AI systems are ideologically biased is to say that you identify, recognise and are concerned about the problem of algorithmic bias.”

Nonetheless, Nelson expressed scepticism about the prospects for bipartisan cooperation on these issues given the prevailing political climate. “Problems that have been differently named — algorithmic discrimination or algorithmic bias on the one hand, and ideological bias on the other — will be regrettably seen as two different problems,” she said.

The evolving discourse reflects a broader tension between advancing technological innovation and addressing social equity. As tech companies navigate regulatory investigations and shifting government priorities, the future of inclusive AI development remains uncertain. While existing inclusive technologies like the Monk Skin Tone Scale persist in widely used products, questions loom over sustained investment and support for initiatives aimed at ensuring AI serves diverse global populations effectively.

Source: Noah Wire Services