Google Creates ‘DreamFusion’ AI Tool That Produces Great 3D Models From Text
By Nicole Rodrigues, 03 Oct 2022
Last year, Google introduced Dream Fields, an AI system that could turn text into 3D objects. Now DreamFusion arrives to knock its predecessor out of the ring with a new take on the AI art movement.
DreamFusion is an updated version of Dream Fields, announced in 2021, which could likewise take text and churn out 3D renderings. Ben Poole, a co-author of the proof-of-concept paper, announced the new system on Twitter.
Happy to announce DreamFusion, our new method for Text-to-3D! https://t.co/4xI2VHcoQW
— Ben Poole (@poolio) September 29, 2022
We optimize a NeRF from scratch using a pretrained text-to-image diffusion model. No 3D data needed!
Joint work w/ the incredible team of @BenMildenhall @ajayj_ @jon_barron#dreamfusion pic.twitter.com/YeG0zaFxuu
One key difference between the two models is that DreamFusion is powered by Google’s in-house Imagen text-to-image technology. Rather than training on 3D data, the system optimizes a Neural Radiance Field (NeRF), a neural network that represents a 3D scene, until its 2D renderings satisfy the pretrained Imagen diffusion model.
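The core loop from the paper, Score Distillation Sampling, can be sketched in a toy form: render the 3D representation, add noise, ask the diffusion model to predict that noise, and push the renderer’s parameters in the direction that makes its images easier to denoise. Everything below is a stand-in for illustration only: the one-value-per-pixel “renderer”, the toy denoiser, and the fixed `target` image are simplifications, not Imagen or a real NeRF.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
# Stand-in for what the text prompt "wants" the rendering to look like.
target = np.full((H, W), 0.8)

def render(params):
    # Real DreamFusion renders a NeRF from a random camera pose; here the
    # "scene" is just one learnable value per pixel, squashed into [0, 1].
    return 1.0 / (1.0 + np.exp(-params))

def denoiser(noisy_img, sigma):
    # Toy stand-in for the pretrained diffusion model: predicts the noise
    # that was added, assuming the clean image should look like `target`.
    return (noisy_img - target) / max(sigma, 1e-8)

params = rng.normal(size=(H, W))
lr = 0.5
for step in range(200):
    img = render(params)
    sigma = rng.uniform(0.1, 1.0)        # random noise level each step
    eps = rng.normal(size=img.shape)
    noisy = img + sigma * eps
    eps_hat = denoiser(noisy, sigma)
    # Score Distillation Sampling gradient: (eps_hat - eps), backpropagated
    # through the renderer only, skipping the denoiser's own Jacobian --
    # the key trick that lets a frozen 2D model supervise 3D parameters.
    d_img = eps_hat - eps
    d_params = d_img * img * (1.0 - img)  # chain rule through the sigmoid
    params -= lr * d_params

final = render(params)   # the optimized "rendering" now matches the target
```

After the loop, `final` has converged toward `target`, the same way DreamFusion’s NeRF converges toward scenes whose renderings the diffusion model scores as matching the text prompt.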
In contrast to the old Dream Fields, the new model can re-light shots and offers greater depth and higher quality. It also allows multiple generated objects to be composed together in one scene.
On the website are images of animals doing different activities, from wearing crowns to taking selfies. The 3D pictures show great depth and visualization, even if they slightly miss the realistic mark.
While DreamFusion cannot currently produce hyperrealistic images the way OpenAI’s DALL-E 2 can, its paper suggests it has the foundations to do so one day.
However, as it’s still in its early phases, many of the sample renders on the DreamFusion website may look a little off or unfinished. Still, if this is just what the first version can do, there’s no telling how far it could eventually be taken in the 3D modeling and video game industries once the system is fully realized.
For now, there is no set date for its arrival in the AI art sphere.
[via Futurism and The Decoder, cover image via Dreamfusion]