Elections watchdog warned AI presents ‘high’ risk in current campaign: internal documents

The use of artificial intelligence (AI) in Canada’s ongoing election campaign has been classified as a “high” risk in an internal briefing note prepared for Commissioner of Canada Elections Caroline Simard, the country’s election watchdog. The document raises concerns about the potential for AI tools to be used in ways that violate election rules.

While the Canada Elections Act does not specifically prohibit the use of AI, bots, or deepfakes, certain provisions of the act could apply if AI tools are used to spread disinformation, publish false information about the electoral process, or impersonate an elections official. The briefing note highlights particular concerns about AI tools and deepfakes, hyper-realistic fabricated videos or audio recordings.

The document points to examples of deepfakes being used in elections abroad and warns that similar incidents could occur in Canada. It also notes an increase in advertising for customized deepfake service offerings on the dark web. The impact of a deepfake can depend on how widely it is circulated, the note says.

Michael Litchfield, director of the AI risk and regulation lab at the University of Victoria, acknowledges the challenges of identifying individuals who use AI to violate election rules. He emphasizes that AI is an amplifier of threats to the electoral process and can make it easy to create content that contravenes the act.

Fenwick McKelvey, an assistant professor of information and communication technology policy at Concordia University, agrees that AI adds a complicated layer to the campaign landscape. He notes that AI tools can generate disinformation faster than it can be debunked, leading to a more challenging media environment.

Chief Electoral Officer Stéphane Perrault has expressed concerns about AI being used to spread disinformation about the electoral process. He has reached out to social media platforms to seek their support in combatting disinformation from generative AI. However, McKelvey is skeptical about the companies’ commitments, noting that generative AI is something that platforms themselves are promoting.

The internal briefing note indicates that Canada has generally taken a “self-regulation” approach to AI, leaving oversight in the hands of the tech industry. However, the document warns that the effectiveness of self-regulation is contested. Bill C-27, which would have regulated some uses of AI, was introduced in the last parliamentary session but was not passed before the session ended.

Overall, the briefing note highlights the potential risks associated with the use of AI in Canada’s election campaign and the challenges of regulating AI to prevent violations of the Canada Elections Act. The document underscores the need for vigilance and proactive measures to address the misuse of AI tools in the electoral process.
