In the past week, Instagram has been automatically labeling any photo touched by artificial intelligence tools as “Made with AI.” On the surface, this sounds like a win for photographers who decry the use of AI as intellectual property theft. But in practice, the way Meta has decided to flag AI leaves little room for nuance or debate.
Buckle up, because this is probably going to be a hot take, especially coming from a photojournalist.
Earlier this week, I uploaded a photo from my phone to Instagram of famous New York City photographer Luis Mendes at Photoville. I did a lot of the usual Photoshop edits that are common in my non-journalistic work, and in the process I lassoed a small area at the edge of the frame and asked generative AI to remove a highlight. Here are the before and after results:
Imagine my surprise when, upon upload, Instagram labeled the post “Made with AI” prominently at the top.
Well, no. I lassoed a small part of the image and asked AI to edit it. It's an edit I could have made with the clone tool and a little extra time, and the end result would have been nearly identical. Yet one image gets flagged and the other does not.
It is in this distinction that the broad brush of “Made with AI,” while noble in its pursuit of truth, fails. The label implies the image was created out of whole cloth by AI when it wasn't at all. The AI was used as a retouching tool, the same way a dodge or burn tool, a clone tool, or a healing brush would be used. Singling out the use of a generative AI tool for this label misunderstands how AI was used in this case.
Sure, if DALL-E or Midjourney had created this picture of Luis Mendes out of thin air, the “Made with AI” label would apply. But I'm not sure it should apply here, as Mr. Mendes was actually standing there when I took the photo (snakes and dinosaurs weren't, as I'll get to in a moment).
It could also have a chilling effect on retouching in general. Here's another example where the “Made with AI” label doesn't make sense:
The only AI “crime” here, then, was using it to remove the front license plate of the car in this photo. That is something I would file under retouching work, not generating an image wholesale with AI. If this kind of editing is demonized, why even have AI tools in the first place?
There are other problems here. A commenter on my post asked whether using the AI Denoise function in Photoshop would trigger the label, something that would look very bad for event shooters who use that tool on client work. I tried it, and it seems that using that tool does not apply the label on Instagram. Instagram's help page about the label is cryptic, saying only that it looks at “industry standard signals” to make its determination. I've also seen AI noise reduction introduce some strange artifacts and made-up faces into photos, so it's not immune to fabrication either.
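As best I can tell, those “industry standard signals” are provenance metadata embedded in the file itself, things like IPTC digital source type tags and C2PA Content Credentials, which Photoshop's Generative Fill is known to write. The short Python sketch below is purely illustrative, not Instagram's actual detection code, and the filename is a placeholder; it simply scans an exported image for those marker strings to show the kind of check that is likely being made.

```python
# A rough, hypothetical sketch -- not Instagram's detection pipeline.
# It scans an exported image's raw bytes for provenance markers that
# "industry standard signals" most likely refers to: IPTC digital source
# type values and C2PA / Content Credentials manifests.
from pathlib import Path

IMAGE_PATH = Path("exported_photo.jpg")  # placeholder filename

# Byte patterns associated with AI-provenance metadata.
SIGNALS = {
    b"c2pa": "C2PA / Content Credentials manifest",
    b"compositeWithTrainedAlgorithmicMedia": "IPTC digital source type: composite with AI elements",
    b"trainedAlgorithmicMedia": "IPTC digital source type: AI-generated media",
}

data = IMAGE_PATH.read_bytes()
hits = [label for marker, label in SIGNALS.items() if marker in data]

if hits:
    print("Provenance signals found:")
    for label in hits:
        print(" -", label)
    print("An upload like this could plausibly be flagged 'Made with AI'.")
else:
    print("No obvious AI-provenance markers found in this file.")
```

If that is roughly how the check works, it would explain why my Generative Fill edit got flagged while an identical clone-tool edit would have passed through untouched.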
Also, the image at the top of this post, which is very clearly AI-generated and presented as such, receives no such designation when uploaded from a desktop web browser. That is a huge flaw and a pretty uneven application of Instagram's AI policy.
Creators have long adopted new tools to make better art, whether for client work or personal satisfaction. But for a company as big as Meta to appoint itself the arbiter of how the “Made with AI” label is applied is a huge mistake, and it paints a scarlet letter on those who are using AI responsibly.
Yes, journalists and other truth-tellers should never use such tools to edit their work, but should a wedding photographer's photos be labeled “Made with AI” when they are uploaded to social media? Should the bride be concerned? Should a company that tweaks a photo to cover up a wardrobe malfunction face backlash on social media when the image is labeled as AI?
These are questions that Meta has not fully considered.
But they are questions photographers will definitely be considering.
What do you think about the new “Made with AI” label on Instagram? Leave your thoughts in the comments below.