It’s All Fun And Games Until AI Identifies Your Selfies With Racial Slurs
By Thanussha Priyah, 18 Sep 2019
Image via Shutterstock.com
Twitter was flooded this week with people posting selfies captioned with labels tied to their faces. The images were generated by ‘ImageNet Roulette’, a website that uses artificial intelligence to label the person in a photo according to biases rooted in its training data.
The website was created by professor and researcher Kate Crawford and artist Trevor Paglen, who set out to demonstrate the risks of training AI on datasets with built-in biases.
The AI was trained on photos from the image database ImageNet, which contains over 14 million labeled photos uploaded since 2009. The creators of ‘ImageNet Roulette’ trained the AI on 2,833 subcategories of the keyword “person.”
The site is free for all to use, which inspired many to try out the tool for themselves.
It was all fun when people found they could pass as various roles like “pilot,” “enchantress” or “head nurse” in their selfies. Some were even tagged with compliments such as “stunner” and “beauty.”
But the AI system was designed to be “problematic,” and it played its part: other users received disturbing labels, including racial slurs and criminal categories like “first offender.”
When you visit the website, it comes with a disclaimer that states, “‘ImageNet Roulette’ regularly classifies people in dubious and cruel ways.”
The whole idea behind the tool is to “demonstrate how various kinds of politics propagate through technical systems,” often without the creators of those systems even being aware of it.
The pair conceived the project to showcase what can happen when the data used to train AI algorithms is inherently biased, or just brutal.
Crawford and Paglen explain on their site, “ImageNet contains a number of problematic, offensive and bizarre categories—all drawn from WordNet. Some use misogynistic or racist terminology.”
Crawford also tweeted that the ImageNet database is a “major achievement” in the history of AI, but one whose ‘Person’ category reveals “the deep problems with classifying humans.”
This project is currently on display in an ongoing exhibition titled Training Humans at the Prada Foundation in Milan.
Want to see how an AI trained on ImageNet will classify you? Try ImageNet Roulette, based on ImageNet's Person classes. It's part of the 'Training Humans' exhibition by @trevorpaglen & me - on the history & politics of training sets. Full project out soon https://t.co/XWaVxx8DMC pic.twitter.com/paAywgpEo4 — Kate Crawford (@katecrawford) September 16, 2019
ImageNet is one of the most significant training sets in the history of AI. A major achievement. The labels come from WordNet, the images were scraped from search engines. The 'Person' category was rarely used or talked about. But it's strange, fascinating, and often offensive.— Kate Crawford (@katecrawford) September 16, 2019
It reveals the deep problems with classifying humans - be it race, gender, emotions or characteristics. It's politics all the way down, and there's no simple way to 'debias' it.— Kate Crawford (@katecrawford) September 16, 2019
No matter what kind of image I upload, ImageNet Roulette, which categorizes people based on an AI that knows 2500 tags, only sees me as Black, Black African, Negroid or Negro.— Lil Uzi Hurt (@lostblackboy) September 18, 2019
Some of the other possible tags, for example, are “Doctor,” “Parent” or “Handsome.” pic.twitter.com/wkjHPzl3kP
Everyone is a criminal to this AI. pic.twitter.com/pSdgwWXSUW— Brian McNett (@b_mcnett) September 18, 2019
Fascinating insight into the classification system and categories used by Stanford and Princeton, in the software that acts as the baseline for most image identification algorithms. pic.twitter.com/QWGvVhMcE4— Stephen Bush (@stephenkb) September 16, 2019
[via Business Insider, cover image via Shutterstock.com]