As generative artificial intelligence (AI) technologies like ChatGPT and Google Gemini continue to transform the digital landscape, the conversation in Canada has largely centred on commercial innovation. Yet, there is a growing discourse around the potential for AI to be developed and governed as a public utility, echoing Canada’s longstanding tradition of public service media such as the CBC and Radio-Canada. This model raises pertinent questions about the future of AI in the country, suggesting an alternative approach grounded in public interest rather than commercial profit.
Commercial AI’s rise has hinged on vast amounts of user-generated content freely available online, effectively treating the internet as a global "knowledge commons". However, this reliance on publicly sourced data has sparked concerns over who benefits from these technologies. Canada’s historical connection to AI innovation, such as the early automated translation efforts that drew on Canadian parliamentary transcripts in the 1980s, illustrates a precedent for harnessing public data for AI development. The question now is whether Canada could intentionally shape AI’s future in a similar, publicly oriented manner.
An initiative like CanGPT has been proposed as a Canadian public-service AI model, inspired by efforts in countries such as Switzerland, Sweden, and the Netherlands, which are exploring national AI systems designed to serve public needs. While the Canadian government has developed some internal AI tools, such as CANChat, a generative AI chatbot designed to boost productivity among federal employees, these remain limited in scope and are not intended for wider public use. Meanwhile, Montréal’s arts-based organizations have expressed interest in commons-based AI infrastructures but face resource constraints, hinting at the potential advantages of a coordinated, national public initiative.
Public broadcasters like the CBC offer a fitting model for this approach. These institutions were established to ensure new communication technologies serve democratic ends, and a similar mandate could extend into AI development. Canada’s multilingual archives of audio, video, and text dating back decades could form a foundational dataset for a Canadian public AI, framed explicitly as a public good. A publicly governed AI model could provide open-source access across the country either through online platforms or local applications, embedding Canadian cultural and linguistic diversity into its core functionality.
Beyond access, CanGPT would invite essential discussions on the ethical and societal limits of AI technologies. Generative AI has been implicated in harmful uses, including deepfake pornography and other forms of technology-facilitated violence. Currently, corporations largely set content moderation and usage policies, decisions with significant political and social ramifications. A public AI initiative governed by democratic principles could shift these decisions away from private companies and, through transparent institutions, facilitate public debate on responsible AI governance.
This model contrasts with Canada’s existing AI infrastructure strategy. The federal government’s substantial investments, such as the AI Sovereign Compute Strategy and large-scale data centre projects, emphasize building expansive AI capabilities, much of which might benefit American tech firms more than Canadian public interests. Public AI models could instead prioritize smaller, more energy-efficient systems suited to targeted tasks, potentially reducing environmental impact and operational costs. Such frugality and intentionality could offer a more sustainable, less risky way forward amid concerns over an AI investment bubble.
Implementing a public AI system like CanGPT would not be without challenges. Funding, ongoing updates, and maintaining competitive performance relative to commercial AI would require rigorous planning and resources. Nonetheless, it would spearhead a vital national conversation on AI's social role, ethics, and governance, possibly redefining digital sovereignty in Canada. This vision aligns broadly with emerging federal efforts, such as the Artificial Intelligence Strategy for the federal public service launched in 2025, which seeks to establish an AI Centre of Expertise focused on secure, responsible AI, alongside government-led tools like CANChat designed to support internal users responsibly.
Moreover, recent steps towards responsible AI development in Canada show a growing commitment to transparency and ethics. For instance, the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, adopted by major organizations including CGI and IBM, outlines principles for ethical AI use. Meanwhile, guidelines issued to Canadian public servants stress ensuring transparency, mitigating cybersecurity risks, and avoiding discriminatory AI outcomes.
There is also a broader conversation about fostering public-private partnerships and open-source AI frameworks, as highlighted by the Canadian Chamber of Commerce. These discussions stress the importance of developing AI that is accessible, customizable, and secure, avoiding overreliance on proprietary systems controlled by global tech giants.
In essence, the idea of a national public AI, such as CanGPT, represents a bold reimagining of AI’s role in society. Rather than another subscription service from Big Tech, it could embody a distinctly Canadian approach to AI, one rooted in public good, cultural richness, democratic accountability, and environmental prudence. While the road to public AI innovation remains complex, opening this dialogue is critical to ensuring that AI fulfills its potential as a transformative tool that benefits all Canadians.
Source: Noah Wire Services