Twitter Pays User $3,500 For Showing Its AI Favors Slimmer People, Lighter Skin
By Mikelle Leow, 11 Aug 2021
Image via XanderSt / Shutterstock.com
Ask for help when you need it, and that’s exactly what Twitter did after recognizing that biases were lurking in its image-cropping algorithm. Earlier this month, the social network launched its first bounty reward contest, inviting researchers and hackers to uncover “potential harms of this algorithm beyond what we identified ourselves.” The winners were announced on Tuesday, August 10.
The top prize, a US$3,500 cash reward, went to Bogdan Kulynych, a PhD student at Switzerland’s EPFL technical university. While Twitter was aware its AI seemed to prefer white over Black faces, and tended to crop the latter out from images, it wanted to discover additional shortcomings. Kulynych certainly delivered that, detailing, “The target model is biased towards deeming more salient the depictions of people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits.”
In essence, the algorithm seems influenced by long-standing ideals further perpetuated by beauty filters—and that’s not good. “This bias could result in exclusion of minoritized populations and perpetuation of stereotypical beauty standards in thousands of images,” he added.
To conduct his research, Kulynych fed his dataset photos of human faces, along with photorealistic variations created by AI, to measure saliency, or noticeability.
Saliency scores generally rose for faces that looked younger and thinner, CNET reported, while skin tones that were lighter, warmer, more saturated, and higher in contrast similarly ranked higher.
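At its core, this kind of probing compares a saliency model’s scores on an original image against edited variants that change one attribute at a time. The sketch below illustrates that comparison in minimal form; the brightness-based scoring function is a toy stand-in invented for illustration, not Twitter’s actual cropping model or Kulynych’s code.

```python
# Illustrative sketch of counterfactual saliency probing: score an image
# and an edited variant, then compare. toy_saliency is a made-up stand-in
# (mean pixel brightness), NOT the real saliency model.

def toy_saliency(image):
    """Toy stand-in for a saliency model: mean brightness of grayscale pixels."""
    flat = [px for row in image for px in row]
    return sum(flat) / len(flat)

def lighten(image, amount=30):
    """Produce an edited variant with uniformly lighter pixels (capped at 255)."""
    return [[min(255, px + amount) for px in row] for row in image]

original = [[100, 110], [120, 130]]   # tiny fake grayscale "face"
variant = lighten(original)

# A positive shift means the edit made the image more "salient" to the model.
delta = toy_saliency(variant) - toy_saliency(original)
print(f"saliency shift after lightening: {delta:+.1f}")  # prints +30.0
```

Kulynych’s real experiments applied this idea with photorealistic, AI-generated face variations and the actual model’s scores, which is how the skin-tone and facial-trait biases surfaced.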
Images via Bogdan Kulynych / GitHub (MIT License)
“This shows how algorithmic models amplify real-world biases and societal expectations of beauty,” Twitter responded to the findings.
Though conducted independently, a study from deep learning and web security expert Vincenzo di Cicco, who won the contest’s ‘Most Innovative’ award, somewhat backs these results up. Instead of human faces, di Cicco looked into emoji, and found that the algorithm favors lighter-skinned ones.
Thanks to participating bounty hunters, Twitter also learned that the less-than-perfect image-cropping feature tends to exclude the elderly and disabled, and favors Latin over Arabic text in bilingual memes.
While the company expects to improve its algorithm with these findings, it also believes the industry would benefit from the detection of lesser-known catalysts for “algorithmic harms.”
1st place goes to @hiddenmarkov whose submission showcased how applying beauty filters could game the algorithm’s internal scoring model. This shows how algorithmic models amplify real-world biases and societal expectations of beauty.
— Twitter Engineering (@TwitterEng) August 9, 2021
2nd place goes to @halt_ai who found the saliency algorithm perpetuated marginalization. For example, images of the elderly and disabled were further marginalized by cropping them out of photos and reinforcing spatial gaze biases.
— Twitter Engineering (@TwitterEng) August 9, 2021
3rd place goes to @RoyaPak who experimented with Twitter's saliency algorithm using bilingual memes. This entry shows how the algorithm favors cropping Latin scripts over Arabic scripts and what this means in terms of harms to linguistic diversity online.
— Twitter Engineering (@TwitterEng) August 9, 2021
The most innovative award in the algorithmic bias bounty goes to @0xNaN who explored Emoji-based communication to uncover bias in the algorithm, which favored light skin tone Emojis. This entry shows how well-meaning adjustments to photos can result in shifts to image salience.
— Twitter Engineering (@TwitterEng) August 9, 2021
The most generalizable award in the algorithmic bias bounty goes to an anonymous entrant who thanks @USCViterbi for teaching them the fundamentals of image processing. By adding nearly invisible pixels to an image, this entrant was able to alter the algorithm’s preferences.
— Twitter Engineering (@TwitterEng) August 9, 2021
[via CNET, images via various sources]