Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Text2VDM: Text to Vector Displacement Maps for Expressive and Interactive 3D Sculpting

Created by
  • Haebom

Author

Hengyu Meng, Duotun Wang, Zhijing Shao, Ligang Liu, Zeyu Wang

Outline

This paper presents Text2VDM, a novel framework that generates vector displacement map (VDM) brushes from text prompts by deforming a dense planar mesh with score distillation sampling (SDS). Existing SDS approaches focus on generating entire objects and struggle with sub-object structures such as brushes; the authors identify this as a semantic binding problem and address it by introducing weighted blending of prompt tokens into SDS. The resulting framework produces diverse, high-quality VDM brushes and supports applications such as mesh stylization and real-time interactive modeling.
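To make the VDM concept concrete, here is a minimal NumPy sketch of how a VDM brush displaces a mesh: each texel of the map stores a 3D offset vector, and each vertex is moved by the vector sampled at its UV coordinate. This is an illustrative assumption about the general mechanism of VDM brushes, not the paper's implementation; `apply_vdm_brush` and its nearest-neighbor sampling are hypothetical simplifications.

```python
import numpy as np

def apply_vdm_brush(vertices, vdm, uv, strength=1.0):
    """Displace mesh vertices by sampling a vector displacement map.

    vertices: (N, 3) array of vertex positions
    vdm:      (H, W, 3) array of per-texel 3D displacement vectors
    uv:       (N, 2) texture coordinates in [0, 1]
    strength: scalar multiplier for the displacement
    """
    h, w, _ = vdm.shape
    # Nearest-neighbor sample of the VDM at each vertex's UV coordinate
    ix = np.clip(np.round(uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    iy = np.clip(np.round(uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    # Unlike a scalar height map, a VDM can push vertices in any direction
    return vertices + strength * vdm[iy, ix]
```

A production sculpting tool would use bilinear or bicubic sampling and apply the displacement in the surface's tangent frame, but the core idea, a full 3D offset per texel rather than a single height value, is what lets VDM brushes express overhangs and undercuts.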

Takeaways, Limitations

Takeaways:
Generates diverse, high-quality VDM brushes from text prompts.
Solves the problem of generating sub-object structures (semantic binding), a limitation of existing SDS-based models.
Applicable to a variety of tasks, including mesh stylization and real-time interactive modeling.
Integrates seamlessly with major modeling software.
Limitations:
Generalization and robustness across diverse text inputs may require further evaluation.
Objective metrics for the quality and diversity of generated brushes may need to be developed.
Full compatibility with real artists' workflows may require additional validation.