There is a new Cambrian Explosion underway. AI tools are maturing and multiplying at a speed that is stunning even to tech industry veterans like me.
Do not allow yourself to doubt that the AI revolution will transform society. ChatGPT and Google Bard saw adoption by hundreds of millions of users at the start of 2023 — an insane pace of growth. Within weeks, countless students, employees, and government workers — not to mention new kinds of poets, artists, and composers — were using generative AI for a rapidly expanding list of tasks previously imagined to be exclusively human in nature.
There are pundits who see promise in AI and pundits who see doom in AI, but too much of this discussion is high-minded and theoretical. What we really need is to have a practical, brass-tacks conversation about AI as realists — about the real world in the age of AI and about Canada’s place in it.
Canada’s Opportunity With Generative AI
Make no mistake — Canada must learn to use generative AI productively. As Canadians, we cannot afford to have employees, companies, inventors, creators, or government agencies that are considerably less productive or less effective than those of our global peers — much less than those of our global antagonists, all of whom are adopting generative AI as rapidly as possible.
Yet we must recognize our interests in AI do not overlap perfectly with those of other nations. The United States has become the clear leader in producing AI systems, a position they are likely to ultimately share with China. Canada, long a leader in AI research, could once have imagined sharing this role, but the relentlessly entrepreneurial United States and regrettably authoritarian China each leapt early, while we didn’t. Let’s be brutally honest with ourselves and admit that Canada is now unlikely to seize the initiative to provide AI to the world any time soon.
But there’s no time for regrets. Instead, we must understand that, unlike Canada, the United States and China now stand to reap massive economic benefits from global AI adoption — and that these benefits will make it easy for them to turn a blind eye to, or at least to bear, some of the key threats and costs associated with generative AI risks in a way that isn’t true for Canada.
Generative AI Risks: Our Data Should Be Ours
Chief among these is the fact that AI gathers data like a vacuum. As a computing system, it ingests and stores literally everything that passes through it. Whether we mean personal information, health records, corporate secrets, creative feats, or national strategies, we mustn’t let AI leaders lull us into complacency with assurances about what they’ll do with our data, because the incentives are clear. AI systems that learn more, do more. AI systems that can do more will be used more. And AI systems that are used more will gather yet more data from users around the world to learn from. In this circle, the parties gaining data benefit and the parties giving away their data ultimately lose.
The Canadian economy once ran on oil, but as we decouple ourselves from petroleum in the 21st century, the Canadian economy now runs on data. Data is the new oil. And at the dawn of the AI age, Canadians are rapidly, breathlessly, giving our data away to nations and economies outside of Canada that are providing AI to the world.
Every time a Canadian asks ChatGPT to summarize a document, perform a data conversion, help in developing a corporate strategy, or draft a letter, we are giving away intellectual property, secrets, and data assets. The same is true every time a Canadian uses one of the already innumerable software tools that have recently been “AI-enabled,” whether this means having Grammarly’s AI “read” internal documents or having Zendesk’s AI “help with” customer interactions.
Canada does not reap the bulk of the benefits that accrue from providing this data to AI; Silicon Valley does. The United States does. In the AI “gold rush” (in which much of the gold won’t be ours in the end), we are giving away the data that is now foundational to Canada’s prosperity. Whatever the eventual consequences for humanity, the much more immediate and practical consequences for Canada could be grave.
How to Mitigate Generative AI Risks
There is no time to waste. As a nation, we must rapidly take steps to ensure that we don’t enter the dawn of the AI era by giving away the store.
1. Spread Awareness and Educate
We must spread awareness of generative AI risks. We must explain to our students, employees, and government workers, and to our new poets, artists, and composers, that generative AI is not intrinsically benign. Whether it’s ChatGPT or some other “AI-enabled” tool, whatever AI can see, AI can take for its own. In this Cambrian explosion of AI tools, AI is increasingly everywhere. Unconsidered or even inadvertent AI use may mean the devaluation of intellectual property, the escape of private data into the public sphere, or the inadvertent release of government secrets. Each case is a risk the Canadian public and workforce are mostly ignorant of today, and this is an ignorance we can’t afford.
2. Deploy AI Governance Now
As a nation, we must move quickly to deploy AI governance, not just in the sense of statutes and laws, but in the organizational sense more broadly. Schools, companies, hospitals, and government agencies must all quickly begin to deploy policies to internally govern the use of AI for day-to-day work. We must decide when AI can be used, for what tasks, and with what guardrails in place to prevent the sharing of critical data. This may mean painful trade-offs. It’s likely that there will be compelling AI capabilities that we can’t immediately leverage because to do so would give our greatest assets away to other nations and economies.
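To make the idea of internal guardrails concrete, here is a minimal sketch of an organizational AI-use policy gate. Everything in it is hypothetical: the task categories, the sensitivity levels, and the policy table are illustrative examples, not drawn from any existing Canadian framework or standard.

```python
# Minimal sketch of an organizational AI-use policy gate.
# The sensitivity levels, task names, and policy table below are
# hypothetical examples for illustration only.

SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

# Hypothetical policy: the highest data sensitivity each task may expose
# to an external AI service. None means the task is never permitted.
POLICY = {
    "summarize_public_docs": "public",
    "draft_marketing_copy": "internal",
    "analyze_customer_records": None,
}

def ai_use_permitted(task: str, data_sensitivity: str) -> bool:
    """Return True if this task may send data of this sensitivity
    to an external AI service under the (hypothetical) policy."""
    ceiling = POLICY.get(task)
    if ceiling is None:
        # Unknown or prohibited tasks are denied by default.
        return False
    return SENSITIVITY[data_sensitivity] <= SENSITIVITY[ceiling]

print(ai_use_permitted("summarize_public_docs", "public"))       # True
print(ai_use_permitted("draft_marketing_copy", "confidential"))  # False
print(ai_use_permitted("analyze_customer_records", "public"))    # False
```

The deny-by-default choice reflects the trade-off described above: a capability not yet covered by policy is withheld until someone has decided what data it may see.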
3. Develop Software Tools to Combat Generative AI Risks
Finally, we must develop the tools, particularly software tools, to make these policies practically enforceable. We need to be able to immediately detect cases in which sensitive or valuable data is about to be seen by an AI system, and to technically intervene — at least long enough to provide relevant warnings. We need tools that remind employees and creators who might otherwise forget that what they are typing right now is being seen by an AI system that has no internal moral or ethical compass but may be voraciously learning everything it can.
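A first cut at such a detect-and-warn tool can be as simple as pattern screening of text before it leaves the machine. The sketch below is purely illustrative: the patterns (a SIN-like number, an email address, a “CONFIDENTIAL” marker) are assumed examples, not a complete data-loss-prevention ruleset, and a real tool would need far more sophisticated detection.

```python
import re

# Illustrative sketch: screen text for sensitive patterns before it is
# sent to an external AI service, so a tool can warn the user or block
# the request. The patterns below are hypothetical examples only.

SENSITIVE_PATTERNS = {
    "possible SIN": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{3}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def screen_for_ai(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

warnings = screen_for_ai("Summarize this CONFIDENTIAL memo for jane@acme.ca")
print(warnings)  # ['email address', 'confidential marker']
```

A tool built on this idea would run the check at the moment of submission and interpose a warning — exactly the “long enough to provide relevant warnings” intervention described above.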
It’s likely too late, at least for the moment, for Canada to gain from AI adoption in the same way that Beijing or Silicon Valley will. But we mustn’t let that, or arguments about what AI means for the next millennium of human existence, distract us from the details of today. As a nation, we have much to lose to AI if we’re not careful, and these unacceptable losses are virtually guaranteed unless we take the steps I’ve outlined above.
The time to act is now.