Artificial intelligence (AI) is rapidly transforming human life and work, raising profound ethical questions about the responsibilities of those who develop, market, and deploy these technologies. At a recent conference held at The Catholic University of America’s Columbus School of Law, a diverse group of experts convened to explore the ethical frameworks that should guide Big Tech in the era of AI, emphasising a human-centred approach grounded in Catholic social teaching.
Taylor Black, director of AI & Venture Ecosystems in Microsoft’s Office of the Chief Technology Officer and founding director of Catholic University’s new interdisciplinary Institute for AI & Emerging Technologies, set the tone in his keynote. He urged attendees to shift conventional tech discussions from metrics like speed and scale to deeper questions: “What does this technology do to the human person? Who is harmed, who is helped, and who is left behind? What leads to human flourishing and the common good?” Black drew upon Catholic social teaching to highlight the inherent dignity of every human, a dignity that technology cannot confer or revoke. He called for shaping “the moral ecosystem in which technology is built,” urging collaborative frameworks for responsible AI development and broad accountability beyond mere legal compliance.
The conference panelists tackled pressing issues such as corporate accountability and the darker uses of technology. Danielle Bianculli Pinter, chief legal officer at the National Center on Sexual Exploitation's Law Center, highlighted the paradox that while Big Tech corporate responsibility teams may have genuine concerns, they are often sidelined by executives. She pointed out the industry's extensive self-protection via lobbying, regulatory avoidance, and near-blanket legal immunity, advocating instead for liability frameworks that would hold companies accountable for the harms their platforms enable.
Annick Febrey of the Better Trade Collective illustrated how technology facilitates forced labour by luring workers into deceptive job offers, resulting in millions trapped in exploitation. John Cotton Richmond, a former U.S. ambassador-at-large to combat trafficking, described technology itself as morally neutral, capable of serving good or ill depending on human intent. He lamented how human traffickers commodify people for illicit profit, often taking advantage of technological platforms.
The ethical conversation continued in a panel on corporate responsibility amid AI’s rapid evolution. Maryann Cusimano Love, chair of Catholic University’s Department of Politics and consultant to the Holy See Mission at the United Nations, noted that the Catholic Church has long engaged with industry on embedding ethics into AI systems. She referenced the “Rome Call for AI Ethics,” a Vatican-led initiative co-signed by Microsoft and IBM in 2020, which advocates transparency, inclusion, responsibility, impartiality, reliability, and security/privacy in AI development. This document, while a form of soft law, carries normative power that can influence future hard legal requirements.
Industry voices such as Paul Lekas from the Software & Information Industry Association stressed that consensus largely exists on AI ethics principles, but the challenge lies in their practical implementation, translating moral imperatives into concrete technical guardrails. Legal scholar Charles Duan emphasised the need to render these ethical considerations into forms that AI systems can operationalise, ensuring adherence in real-world applications. Challenges like balancing AI training on copyrighted materials with fair use principles were also discussed, reflecting the complexity of aligning innovation with legal and ethical norms.
The conference also featured a sobering personal testimony from Representative Brandon Guffey, whose son tragically died after falling victim to sexual extortion via social media, underscoring the urgent need for safeguarding users against online harms.
In the final panel, focusing on sustainability and risk management, experts addressed how regulation lags behind fast-moving business dynamics. Environmental justice advocates reminded attendees that sustainability is now widely recognised as a material, rather than optional, business factor, despite political divisions. The environmental impact of data centres, the massive facilities that underpin cloud and AI services, was identified as an underreported issue deserving greater scrutiny.
The conference exemplified a holistic approach, blending faith-based ethics, legal insight, and technological expertise to forge pathways for AI that promote human dignity, accountability, and the common good. Catholic University's new AI Institute, led by Taylor Black, aims to continue this interdisciplinary engagement, drawing upon fields as varied as engineering, philosophy, and theology to guide AI innovation responsibly.
The Rome Call for AI Ethics stands as a key international benchmark in this effort. Launched in 2020 with backing from the Vatican, Microsoft, and IBM, and since joined by Cisco and other tech leaders, it articulates an ethical commitment to AI that serves humanity broadly, respects dignity, and resists the exploitation or displacement of people purely for profit. Initiatives aligned with this vision seek to move beyond voluntary codes toward embedding enforceable standards in AI development, reflecting a growing global imperative for ethical governance in an age of transformative technology.
Source: Noah Wire Services