Companies working with AI in Canada now have a new voluntary code of conduct governing how advanced generative artificial intelligence is developed and used in this country.
And while there has already been support from the business community, there are also concerns that it could stifle innovation and the ability to compete with companies based outside of Canada.
Advanced generative artificial intelligence often refers to the types of AI that can produce content. ChatGPT is a popular example, but most systems that generate audio, video, images, or text would also count.
Companies that sign the code accept multiple principles, including that their AI systems are transparent about where and how the information they collect is used, and that methods exist to address potential biases in a system.
They also accept human oversight of AI systems, and developers building generative AI systems for public use must design them so that anything their systems generate can be detected as AI-generated.
“I think if you ask people on the street, they will want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products,” said Industry Minister François-Philippe Champagne, at an AI-focused conference in Montreal last Wednesday.
Legislation such as Bill C-27, which would update privacy legislation and add rules regulating artificial intelligence, is still making its way through Parliament.
The voluntary code therefore offers the federal government another way to set rules in the meantime, so that companies build products people can trust before they even use them.
BlackBerry and Telus among the signatories
Canadian technology company BlackBerry, which uses generative AI in cybersecurity products, is one of the initial signatories of the voluntary code.
According to the company’s CTO, the idea is to make sure there is trust in an AI product before even using it, and that’s a bit of a cultural shift for some.
“People are always using cell phones, computers and networks, and then we try to build trust after the fact,” said Charles Egan in an interview with Breaking:.
“I think AI, especially generative AI, has fantastic potential… so if we implement some guidelines, we can enjoy the benefits and reduce some of the potential dangers of this explosion of generative AI that we are all experiencing,” Egan said.
Egan noted that one advantage he and his company see in the Canadian code of conduct is that it imposes requirements primarily on AI developers, and he believes this means far fewer restrictions for consumers who want to purchase or use generative AI technology.
“If the highway didn’t have signs and traffic lights, everything would be chaos. And I think that’s how I see it, and how BlackBerry sees it, in terms of trying to build trust in this world of AI,” Egan said.
The code of conduct is a ‘step’
Although the code is voluntary, lawyer Carole Piovesan said it is part of a growing ecosystem of regulation and legal measures in Canada.
“This is a step in the process to introduce some kind of more enforceable measures,” said Piovesan, who explained that there are “immediate concerns” as generative AI like ChatGPT or image generators become increasingly popular.
According to Piovesan, the federal government is using the voluntary code to complement and bridge mandatory standards that are still being drafted or passed into law.
Canada’s measures will also be on par with those of the United States and the European Union, in Piovesan’s opinion.
“What Canada is doing in terms of regulating artificial intelligence is trying to be consistent with other jurisdictions like the EU and the US. The EU is very close to passing a fairly prescriptive law called the EU Artificial Intelligence Act,” she said.
Concerns over ‘stifling’ the industry
However, other companies in Canada have expressed concern about the code, despite its current voluntary nature.
Shopify CEO Tobi Lütke criticized the government’s initiative on X, formerly known as Twitter, writing that he will not support the code of conduct.
“We don’t need more arbitrators in Canada. We need more builders. Let other countries regulate while we take the braver route and say ‘come build here.'”
Shopify did not respond to a Breaking: request for comment on Lütke’s post.
And there are mixed feelings among other members of the Canadian industry, too.
“Is it an important thing to include, especially when it comes to consumer data, privacy and cybersecurity? Yes,” said Jeff MacPherson, co-founder of XAgency AI.
“But there is also the aspect of [it having] the ability to slow the growth of the industry,” MacPherson told Breaking:.
XAgency AI develops private generative AI technologies in fields such as business automation and marketing. The company has not yet signed the code of conduct; MacPherson said the team is waiting to see how the industry evolves once the code is in place.
One of his concerns is that different or stricter rules in Canada could make it harder to compete; he pointed to European tech regulations in sectors unrelated to artificial intelligence that have led some companies to choose not to offer services there.
“It can put Canadians at a disadvantage,” he said. “There are a lot of these big tech companies and when these regulations are put in place… they just don’t allow the technologies to be used within the country.”