Meta Prototype lets Users Build Virtual Worlds by Describing Them

Meta is testing an artificial intelligence system that lets people build parts of virtual worlds by describing them. CEO Mark Zuckerberg showed off a prototype at a live event today. The proof of concept, called Builder Bot, could eventually draw more people into Meta's Horizon "Metaverse" virtual reality experiences.

It could also advance the creative AI technology behind machine-generated art. Zuckerberg walked viewers through the process of making a virtual space with Builder Bot, starting with commands like "let's go to the beach," which prompted the bot to create a cartoonish 3D landscape of sand and water around him.

Later commands ranged from broad requests, like creating an island, to highly specific ones, like adding altocumulus clouds and a model of a hydrofoil. They also included sound effects such as "tropical music," which Zuckerberg suggested was coming from a boombox that Builder Bot created, although it could also have been generic background audio. The video doesn't specify whether Builder Bot draws on a limited library of human-created models or whether the AI plays a role in generating the designs.

Several AI projects have demonstrated image generation from text descriptions, including OpenAI's DALL-E, Nvidia's GauGAN2, and VQGAN+CLIP, as well as more accessible applications like Dream by Wombo. But these well-known projects produce 2D images without interactive components, although some researchers are working on 3D object generation.
