Combatting Distrust of AI-Generated Images

By: Adrienne Viarengo

Photographs and videos are often used as proof that an event occurred in a certain way, and news accompanied by images or video has long seemed innately more trustworthy. Now, artificial intelligence’s (“AI”) capability to generate false media jeopardizes reliance on photos and videos to corroborate news stories. Without some process for labeling and identifying AI-generated media, distrust will permeate even the most highly regarded news sources.


One recent example involved an AI-generated image of Pope Francis wearing a particularly fashionable puffer coat, which made the rounds on social media, with some users believing it was a real photograph. While the Pope image was relatively harmless, the viral spread of the disinformation was alarming. More harmful fake AI-generated images have also circulated recently, including a purported photo of an explosion at the Pentagon. When that false image was briefly treated as a genuine report, major stock market indices dipped on national security and economic concerns.

Further exacerbating the problem is that on X, formerly Twitter, many of the accounts that spread the fake imagery were verified with blue check marks. A blue check formerly signified at least some level of identity verification; now blue checks are granted with a paid subscription. This has enabled convincing impersonation accounts that pose as real news outlets, complete with a blue check that many users still trust.


In July 2023, the Biden Administration announced that Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to self-regulate their AI tools and the content generated by them. The commitments included transparency measures identifying AI-generated content and other mitigating efforts, such as information-sharing on risks and the development of public policy tools. Further regulation of AI may be in the pipeline, as Senate Majority Leader Chuck Schumer (D-NY) has promised a comprehensive bill on AI within the next year or so.


In February 2024, Meta announced that it would begin identifying and labeling any images it detects as generated by artificial intelligence tools, regardless of whether they were created with Meta AI or another company’s tool. Meta is working in conjunction with other platforms, such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock, which aim to do the same. Meta will do this by incorporating visible labels, invisible watermarks, and metadata into image files. Because Meta owns Instagram, Facebook, and Threads, all three platforms will comply with the new labeling effort. While this is a voluntary effort on Meta’s part, Meta acknowledges that it is not technically feasible to identify all AI-generated content and says it will continue to develop classifiers for images that lack visible and invisible markers. Meta also acknowledges that bad actors will continuously work to circumvent labeling efforts: “[p]eople and organizations that actively want to deceive people with AI-generated content will look for ways around safeguards that are put in place to detect it. Across our industry and society more generally, we’ll need to keep looking for ways to stay one step ahead.” OpenAI also recognizes that metadata is “not a silver bullet” for establishing authenticity because it can easily be removed.
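OpenAI’s point about metadata is easy to demonstrate. Below is a minimal sketch, assuming the Python Pillow imaging library and hypothetical file names, of how re-saving only an image’s pixels yields a copy with no embedded metadata at all, discarding whatever provenance information a generator attached.

from PIL import Image

# Open an image that may carry provenance metadata (hypothetical file name).
original = Image.open("labeled_image.jpg")
print(original.info)  # any embedded metadata blocks are exposed here

# Rebuilding the image from its raw pixels alone drops every metadata block,
# which is why metadata-based labels are easy to defeat.
stripped = Image.frombytes(original.mode, original.size, original.tobytes())
stripped.save("stripped_image.jpg")

An invisible watermark embedded in the pixels themselves would survive this kind of copy, which is presumably why Meta layers visible labels, watermarks, and metadata rather than relying on any single signal.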


Additionally, Meta is requiring that users disclose when they post “a photorealistic video or realistic-sounding audio that was digitally created or altered.” This is similar to requirements Meta already has in place for political advertisements that contain AI-generated media, as well as disclosure requirements that TikTok and YouTube already have in place.


But how can users spot AI-generated imagery that lacks a label? There are usually tells, and experts implore social media users to slow down and do a bit of detective work before sharing content. AI tools are still new and, despite how convincing they can be, they are not perfect. AI software seems to have a difficult time replicating hands, sometimes adding extra fingers or other disfigurements. If there is any writing in the background of an image, look closely to see whether it is backward or otherwise garbled. Backgrounds of AI-generated images can also contain blurred sections or distortions. Finally, if an image or post claims to depict a newsworthy event, run a separate search to see whether news sources you generally trust are corroborating it, or use fact-checking sites like the Washington Post’s Fact Checker, Snopes, and PolitiFact.


Student Bio: Adrienne Viarengo is a second-year J.D. candidate at Suffolk University Law School and a staff writer for the Journal of High Technology Law. Adrienne has extensive experience working in Democratic politics and government, both in the U.S. Senate and on national and statewide campaigns. She received a Bachelor of Arts degree in Political Science, with a History minor and a concentration in International Relations, and a Bachelor of Arts degree in English from the University of Massachusetts Amherst.


Disclaimer: The views expressed in this blog are the views of the author alone and do not represent the views of JHTL or Suffolk University Law School.
