ChatGPT now lets users create fake images of politicians. We stress-tested it

Recent updates to ChatGPT have raised concerns about the ease of creating fake images of real politicians, as reported by CBC News. While manipulating images of real people without their consent goes against OpenAI’s rules, the company has recently allowed more leeway with public figures, albeit with specific limitations in place. However, CBC’s visual investigations unit found that prompts could be structured in a way to circumvent some of these restrictions.

For instance, reporters were able to generate fake images of Liberal Leader Mark Carney and Conservative Leader Pierre Poilievre appearing in compromising scenarios with criminal and controversial political figures. Aengus Bridgman, an assistant professor at McGill University and director of the Media Ecosystem Observatory, highlighted the risks posed by the proliferation of fake images online, especially during election periods.

“This is the first election where generative AI has been so prevalent and competent in producing human-like content. Many individuals are using it to create content that is misleading and could potentially influence opinions and behaviors,” Bridgman stated. While there hasn’t been widespread evidence of AI-generated images swaying Canadian voters, the potential danger remains a cause for concern.

OpenAI had previously prohibited ChatGPT from generating images of public figures due to potential misuse during elections. However, a recent update introduced GPT-4o image generation, allowing for the creation of images featuring public figures. OpenAI clarified that the intention behind this update was to provide more creative freedom while safeguarding against harmful content like sexually explicit deepfakes. Public figures have the option to opt out, and users can report inappropriate content.

Notably, ChatGPT’s responses sometimes inadvertently suggested workarounds for its own restrictions. While straightforward requests that violated OpenAI’s terms were rejected, rephrased prompts could still produce problematic images. This loophole raises concerns about political disinformation and misinformation spreading through AI-generated content.

Despite the introduction of guardrails and the C2PA indicator for transparency, concerns remain about the misuse of AI-generated images. Gary Marcus, a cognitive scientist, emphasized the challenges of implementing foolproof guardrails in AI systems, noting the difficulty in preventing malicious uses effectively.

As the use of AI technology continues to evolve, it is crucial for platforms like OpenAI to remain vigilant and adapt their policies to mitigate potential risks associated with fake images and disinformation. The balance between creative freedom and responsible usage of AI tools remains a critical issue in the digital landscape, especially during sensitive periods like elections.
