Ensuring that AI and other technology are ‘shaping the world we want to see’
The fact that Cohere, a Toronto-based AI startup that provides language models to power chatbots and search engines, recently raised $270 million in a funding round is just the latest sign that demand for artificial intelligence continues unabated.
But the rampant adoption of a tool with such incredible potential and disruptive power is also alarming. As Emilia Javorsky, a director of the Future of Life Institute, wrote in an open letter (now with over 30,000 signatures) calling for a six-month pause in training more powerful AI systems, that time should be spent on "mitigating those risks."
The Canadian government tabled Bill C-27 in June 2022, privacy legislation that, if passed, would introduce regulations governing the design, development and use of AI systems. But government, out of necessity, moves slowly and painstakingly. AI technology, on the other hand, moves at lightning speed.
How can AI be developed responsibly? Can it be regulated, and who should be involved to ensure it benefits society? We asked the experts to weigh in.
The conversation should stay focused on the here and now
Nick Frosst, co-founder of Cohere
The field of AI is changing fast, but the conversation is changing faster. There's a lot of talk about long-term existential risks, and I'm afraid this obscures some of the more direct impacts the deployment of this technology will have on the job market and education. We're really thinking about making sure we're happy with the application of this technology today, as it is now — not what happens if this technology takes over. Many of these conversations get muddled, and that makes things difficult.
As builders of technology, we want to make sure that its impact on the world is something we are happy with and that it is used for good. So we spend a lot of time on data filtration and human feedback, making sure we align the model with our own beliefs and views on how this technology should be used. We try to engage with a wide variety of people, including others in the field, the broader community, and people both inside and outside Cohere.
Ultimately, it’s up to the makers of the technology to make something they’re proud of. In the early 2010s, a claim by social media companies was, ‘we just make the technology; we cannot decide what is good and what is bad.’ That doesn’t fly anymore. People expect technology companies to make decisions and act as best they can.
We need to look for different perspectives
Deval Pandya, vice-president of AI engineering at the Vector Institute
We are in this age of machine learning and AI – it will affect everything. And my vision is that it will make a huge positive change in addressing some of the biggest challenges we face, such as the climate crisis and healthcare. At the same time, I don’t want to downplay that the risks of AI are very real.
We have enough resources and brains to work on all aspects of potential short-term and longer-term existential risks. We have the tools, we have the know-how to apply most of machine learning safely and responsibly. But we need wise governance to create guardrails to keep social norms intact, for example so that people cannot interfere in the democratic electoral process. That means there are certain rules you have to abide by, certain criteria you have to meet.
And what are those criteria? What is the equivalent of auditing for a machine learning system? There should be a thoughtful discussion. AI is impacting every industry and every aspect of society. It has far-reaching implications and encompasses not only technical aspects, but also social, ethical, legal, economic and political considerations. So we need different perspectives – we need social scientists, political scientists, social workers, researchers, engineers, systems people, lawyers to come together to create something that works for society.
Regulations cannot apply to everyone
Golnoosh Farnadi, Canada CIFAR AI Chair; professor at McGill University; adjunct professor at the Université de Montréal; and core faculty member at Mila (the Quebec AI Institute)
We need to change the narrative that thinking about responsible AI, ethical AI will be detrimental to business. We need trusted parties, verifiers and auditors to first consider what metrics and standards are needed and then create them. We have them in the food industry. We have them in the automotive industry. We have them in medicine. So we need to create some kind of standard for AI systems that is trusted by the public and that changes the way companies deploy systems.
The danger of drawing up rules quickly is that they will not be the right ones: too restrictive or too vague. Given the dynamic nature of AI, we need dynamic regulation. Only standards can create a safer environment. We need to take the time to test them so we can build a better understanding of AI systems and then create the regulations we need.
We must encourage responsible innovation that benefits humanity
Mark Abbott, Director of the Tech Stewardship Program at MaRS, which helps individuals and organizations develop ways to shape technology for the benefit of all.
In all these dialogues around generative AI, people are calling for pause, they’re calling for regulation. That’s great, but fundamentally we need to catch up on our broad stewardship capacity. As a society, we have strong muscles when it comes to developing and scaling technology, but we have weak muscles when it comes to handling it responsibly. And that’s a big problem.
The idea of bringing diverse voices together to manage technology is a Canadian-born concept, created by hundreds of leaders from industry, academia, government, non-profits and professional associations. They have come together to work out what it takes to ensure we develop technology that is more purposeful, responsible, inclusive and regenerative.
The most appropriate metaphor is the environmental movement. It’s like waking up to the nature of our relationship with technology. As in the environmental movement, it’s not one policy, it’s not one group, it’s not just engineers. And that means that each of us has a role, companies have a role, governments have a role. Everyone needs to start showing more stewardship.
The trick is to understand the technology in terms of the effects and the values at play. Then you can make better value-based decisions. And you actually apply that in your daily life. This is especially important for those who play a direct role in creating, scaling and regulating technology. As tech stewards, we want to make sure that AI and other technologies shape the world we want to see – not create one of the dystopian scenarios we see when we go to the movies.
Aidan Gomez, CEO of Cohere, and Deval Pandya will discuss whether and how we can safely use these new AI models during a special MaRS Morning networking session and talk on June 22. More information here.
Disclaimer: This content was produced as part of a partnership and therefore may not meet the standards of impartial or independent journalism.