The Walrus
Carmine Starnino

Forget ChatGPT—Canada Can Cook Up Its Own AI

Fenwick McKelvey spends a lot of time thinking about artificial intelligence, particularly about finding better ways to fulfill AI’s potential beyond the models marketed by Big Tech. As co-director of the Applied AI Institute at Concordia University and an associate professor in communication studies, he analyzes digital policies and has observed governments around the world struggle to regulate AI and manage its immense appetite for data and power. In February, he returned from a conference in Europe inspired by countries like Germany, Sweden, and the Netherlands, which are developing public AI services—open-source models accessible to academic institutions, researchers, companies, governments, and users alike.

I recently spoke to McKelvey about his belief that building what he dubs CanGPT—or Canadian GPT—would help define a national strategy for AI and protect our digital sovereignty.

Let’s start at the beginning. If users can already log on to ChatGPT, why would they need a Canadian version?

If you’re using ChatGPT—or Claude from Anthropic or Google’s Gemini—great. I’ve used the tool to help generate web code. I hear from my research students who use it to improve their writing. As long as these tools deliver, we will keep using them, much like we do a search engine. I call ChatGPT a calculator for words. The thing is, though, no one owns the idea of a calculator, yet right now we have let a few Big Tech firms dominate the AI market.

These firms have also built these AI tools in ways that should raise a lot of questions. The technology behind ChatGPT depends on collecting massive amounts of data, often from online sources. The assumption is that whatever is online is fair game for commercial AI. Not everyone agrees with this. OpenAI faces court cases from writers, comedians, and the New York Times, who claim the firm collected and processed data without consent. That is just not something we can accept going forward.

Regulators know this. Last year, in Ottawa, I joined Canada’s privacy commissioner, his staff, and their international peers at the Privacy and Generative AI Symposium. These were people leading investigations into OpenAI. The Italian privacy regulator had already decided the technology violated Europe’s data-protection laws. Canada’s Office of the Privacy Commissioner was investigating whether OpenAI properly handled Canadians’ personal information when training ChatGPT.

There are concerns not only about where the data to train these models comes from but also about how much energy they consume and who collects all the prompts and questions we ask them. The bigger question is how much more dependent AI might make us on Big Tech.

That is why I am trying to provoke a discussion about how we build AI, which gets me to CanGPT.

So what is a CanGPT?

CanGPT would technically be an open-source large language model available to Canadian users as an online service or, ideally, as an application. It could be used to do the things we’re already asking ChatGPT to do. CanGPT would be a different way to make the technology, not a different kind of technology. Right now, we have two big gaps: we don’t trust the AI technology we currently have, and we can’t imagine a better way of making it. CanGPT tries to fix both by saying: we’re going to build a trustworthy, responsible AI that respects privacy and data rights and that doesn’t let corporate interests define its consequences for Canadian society.

Data for training a CanGPT could come from a wide range of Canadian institutions and government resources. Indeed, that’s how AI started. Christine Mitchell, writing for The Walrus, provided a good example of the value of our data when she described how, in the mid-1980s, the Hansard tapes—magnetic reels containing millions of words of Canadian parliamentary dialogue in English and French—were leaked to IBM and marked the beginning of AI translation.

In the past, when there was debate about new technologies—think radio or television—one response was to create a public service, like the BBC and the CBC. Instead of letting companies build the future of AI, what if Radio-Canada or the CBC built their own AI? The CBC, for example, could easily use its back catalogue to train a model—which would solve the challenge of finding a large enough data set.

This is precisely the AI future that other countries are exploring. The Netherlands is trying to create its own GPT for the Dutch language. Same in Germany. These will become public resources.

How would the process of building CanGPT be different?

Again, the goal is not to disrupt how you use AI but to change what’s happening in the background—all the boring stuff that is very much about power and politics. CanGPT might run on your computer instead of the cloud. Right now, AI consumes not just a lot of data but a lot of energy. OpenAI CEO Sam Altman tweeted that the costs to run ChatGPT are “eye-watering.” The big do-it-all models come with huge environmental costs. Lots of research is exploring how to deliver AI frugally. Nicolas Chartier-Edwards, a PhD student, is looking into downscaling: very small models with far fewer parameters, built to be efficient at particular, targeted tasks. So imagine building CanGPT to work like ChatGPT but customized to train on specific kinds of data and thus carrying a smaller carbon footprint.
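To make the local, downscaled idea concrete, here is a minimal sketch of what running a small open-source model on your own machine can look like, using the Hugging Face transformers library in Python. The model named here (distilgpt2, roughly 82 million parameters) is only an illustrative stand-in for the kind of small model McKelvey describes, not anything connected to CanGPT itself.

    # A minimal sketch of local, small-model inference, assuming the
    # Hugging Face transformers library is installed
    # (pip install transformers). The model choice is illustrative only.
    from transformers import pipeline

    # The model downloads once, then runs entirely on your own hardware:
    # no cloud API, and no prompt leaves your computer.
    generator = pipeline("text-generation", model="distilgpt2")

    result = generator(
        "Public AI in Canada could",
        max_new_tokens=40,  # keep generations short and cheap
        do_sample=True,     # sample instead of greedy decoding
    )
    print(result[0]["generated_text"])

Even a toy setup like this illustrates the point: the smaller the model, the more plausible it becomes to keep both the computation and the data on the user’s side.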

I’m still curious about why we need a CanGPT. What problems are we trying to fix?

The whole problem with AI comes down to public ownership. The law says, right now, that you can’t copyright AI-generated work. I want to push back against that. If an LLM is trained on massive amounts of public data, I want to ask whether that LLM itself can be owned. Can everything that model has learned really be said to be owned by anyone? And if so, then what?

For all the hype around AI, we don’t talk about the implications of having a small group of companies establish the rules, dictate the agendas, prioritize certain harms while ignoring others, and take control of the policy narratives. I think the conversation has to shift to how societies can learn to protect their cultures and preserve certain things as inextricably public.

But isn’t ChatGPT already available for public use?

ChatGPT is a free demo of a commercial service. Who knows how long it will be free. But no matter how you square it, ChatGPT is free only as long as OpenAI wants to make it free. It’s a commercial product, one that’s taken a massive cultural commons and shared it back to us.

If these companies are going to be profitable, they are going to have to charge consumers, triggering what Cory Doctorow has called enshittification. Prices go up, quality goes down—all to keep investors happy. We’re setting up the same problem for AI. Right now, Canada is investing $2 billion in AI infrastructure without mentioning public benefits. We need to find ways to deliver that investment back to everyday Canadians. We can’t trust the tech bros to do that.

Fenwick McKelvey is a member of The Walrus’s Educational Review Committee.
