
Left image generated with AI; right illustration 210379453 © Natanael Alfredo Nemanita Ginting | Dreamstime.com
Even Instagram is having a hard time distinguishing what’s human-made and what’s not. Its recent rollout of ‘Made with AI’ labels has hit a snag, with photographers and other content creators finding their posts frustratingly flagged with the tag despite using minimal editing software or none at all.
The culprit seems to be overly sensitive detection. Meta, Instagram’s parent company, hasn’t publicly disclosed the exact criteria for triggering the ‘Made with AI’ label. However, according to PetaPixel, even minor edits made with tools like Adobe Photoshop’s Generative Fill can trigger it. These tools use AI-powered algorithms to fill in missing areas of an image, but the effect can be similar to more traditional cloning or content-aware fill techniques.
The inconsistency is causing headaches for creators. The report finds that photos with basic blemish removal or background adjustments might be labeled ‘Made with AI’, while others with more extensive edits made using non-AI tools such as the Spot Healing Brush, Content-Aware Fill, or the Clone Stamp might slip through the cracks. This lack of uniformity undermines the purpose of the label, which is presumably to increase transparency around AI-generated content.
Photographer Peter Yan recently faced this issue when his image of Mount Fuji was tagged as ‘Made with AI’. Yan clarified that his only edit was a minor spot clean-up in Photoshop, removing a trash bin from the photo, and that he did not select the ‘Made with AI’ option himself. Because that clean-up used a generative AI tool, however, Instagram applied the label anyway. For many photographers, this may feel unjust, as removing unwanted elements is a standard practice in the field.
Professional inline skater Julien Cadot also found his video marked with the AI tag, despite being unsure why it was flagged.
There are also concerns about the potential impact on creators’ reputations. Some viewers may misread the ‘Made with AI’ label as meaning the entire photo is AI-generated, rather than a photograph that has undergone some minor AI-assisted editing, which could undersell the photographer’s skill.
Similar comments have popped up on social media platforms like X (formerly Twitter).
Currently, these labels appear only on Instagram’s mobile app, not on the desktop version.
Meta has yet to officially comment on the issue, but some workarounds have emerged. Flattening an edited image onto a blank canvas before re-uploading it seems to bypass the detection system, presumably because doing so strips the editing metadata the system relies on. However, this is a clunky solution that shouldn’t be necessary.
The introduction of these tags is a well-intentioned effort, but the current implementation is flawed. Hopefully, Meta will refine its detection system to accurately reflect the use of AI in photo editing without confusing users or undermining creators’ originality.
[via PetaPixel and Android Central, images via various sources]