Easy ‘DragGAN’ Method Lets You Edit AI-Generated Images Via Simple Click & Drag
By Mikelle Leow, 22 May 2023

AI art generators may be revolutionary right now, but they barely scratch the surface of what the future of image creation could hold. Recently, there has been a slew of editing tools that let you alter text-to-image art with, well, text again. Those might be great for people who are driven by language, but what about the others, including the visually inclined who have years of experience in working with digital creativity tools?
Far more intuitive than the pen tool in traditional software, DragGAN is an astonishingly easy technique, developed by researchers at the Max Planck Institute for Informatics, the Saarbrücken Research Center for Visual Computing, MIT CSAIL, the University of Pennsylvania, and Google AR/VR, that lets users manipulate AI-generated images by simply clicking and dragging them.


As outlined in a new paper and a set of videos, the team demonstrates that editing photorealistic generative artwork can be as effortless as clicking an image to place points, then dragging them with your mouse to precisely alter its perspective or details. The method not only makes it easy to create multiple variations of one image; it's also less clumsy than typing text into a prompt box and crossing your fingers that the AI understands your vision.


Thus far, DragGAN can perform manipulations for categories like animals, humans, cars, and landscapes, and it uses a Generative Adversarial Network (GAN) to connect the dots.
Woah! This new AI paper is legit like Adobe Puppet Warp on steroids
Text prompts aren’t the be-all and end-all of AI creation. DragGAN is a perfect example of giving creators fine-grain control over the AI image generation process.
Here’s the TL;DR summary: pic.twitter.com/sjoA47hbiT
— Bilawal Sidhu (@bilawalsidhu) May 19, 2023
You could, for instance, click along the eyes of a cat to shut them, or add a point above a model’s mouth to make them smile. You could also adjust clothing and poses using this method.

The tool studies the positioning of the input points and tweaks the image to match them.
“Our approach can hallucinate occluded content, like the teeth inside a lion’s mouth, and can deform following the object’s rigidity, like the bending of a horse leg,” the researchers further explain.

The team adds that the output will be realistic “even for challenging scenarios,” such as “deforming shapes that consistently follow the object’s rigidity.” In other words, DragGAN adapts to the scenario, instead of squishing objects as is common with warp tools.
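Conceptually, the click-and-drag loop can be pictured with a toy sketch. The real DragGAN optimizes a StyleGAN latent code using motion supervision and point tracking; the function below, including its name and step size, is a simplified illustrative assumption, not the paper's implementation. It only shows the core idea: each iteration nudges a handle point a small step toward the user's target until they meet.

```python
# Toy sketch of DragGAN-style point dragging (illustration only; the real
# method optimizes a generator's latent code so image features move --
# this hypothetical helper just moves a 2D point the same way).

def drag_point(handle, target, step=1.0, tol=0.5, max_iters=1000):
    """Move `handle` toward `target` in small increments, mimicking how
    each optimization step shifts a handle point slightly closer to the
    target position until they coincide."""
    hx, hy = handle
    tx, ty = target
    for _ in range(max_iters):
        dx, dy = tx - hx, ty - hy
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= tol:                # close enough: editing converged
            break
        # unit direction toward the target, scaled by the step size
        hx += step * dx / dist
        hy += step * dy / dist
    return (hx, hy)

# e.g. dragging a point on a cat's eyelid downward to close the eye
print(drag_point((10.0, 40.0), (10.0, 55.0)))  # → (10.0, 55.0)
```

In the actual system, each such step is driven by a loss on the generator's feature maps rather than raw coordinates, which is why the surrounding image deforms realistically instead of just translating pixels.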
Accordingly, the concept has been met with so much amazement that people have flocked to the official DragGAN site and crashed it—even though the technology isn’t publicly accessible yet and has only been detailed in a research paper.
Video by Pan et al, featured with permission
The technique has been likened to “Photoshop on steroids,” since it yields results comparable to pro imaging apps without the manual work.

Bilawal Sidhu, an AI creator and a former product manager at Google, says the tool is a sign of generative artificial intelligence’s game-changing possibilities. “Text prompts aren’t the be-all and end-all of AI creation,” he tweets. “DragGAN is a perfect example of giving creators fine-grain control over the AI image generation process.”
You can find out more about this incredible editing method by checking out the team’s Hugging Face paper.

[via 80 Level and The Decoder, videos and images by Pan et al and featured with permission]