Daily Arxiv
This page collects papers on artificial intelligence published around the world.
The summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for the papers belongs to the authors and their institutions; please cite the source when sharing.
Emotions as Ambiguity-aware Ordinal Representations
From Tabula Rasa to Emergent Abilities: Discovering Robot Skills via Real-World Unsupervised Quality-Diversity
Enhancing Model Privacy in Federated Learning with Random Masking and Quantization
Scaling Laws for Task-Stratified Knowledge in Post-Training Quantized Large Language Models
Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Vocoder-Projected Feature Discriminator
ControlEchoSynth: Boosting Ejection Fraction Estimation Models via Controlled Video Diffusion
Explain Before You Answer: A Survey on Compositional Visual Reasoning
Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution
PediatricsMQA: a Multi-modal Pediatrics Question Answering Benchmark
VideoEraser: Concept Erasure in Text-to-Video Diffusion Models
A Systematic Survey of Model Extraction Attacks and Defenses: State-of-the-Art and Perspectives
GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation
Input-Time Scaling
LinguaSafe: A Comprehensive Multilingual Safety Benchmark for Large Language Models
A Survey on Parallel Text Generation: From Parallel Decoding to Diffusion Language Models
StreetViewAI: Making Street View Accessible Using Context-Aware Multimodal AI
Putnam-AXIOM: A Functional and Static Benchmark for Measuring Higher Level Mathematical Reasoning in LLMs
From Imitation to Optimization: A Comparative Study of Offline Learning for Autonomous Driving
R-Zero: Self-Evolving Reasoning LLM from Zero Data
Human-Centered Human-AI Interaction (HC-HAII): A Human-Centered AI Perspective
GTPO: Trajectory-Based Policy Optimization in Large Language Models
Contrastive Multi-Task Learning with Solvent-Aware Augmentation for Drug Discovery
A Large-Scale Benchmark of Cross-Modal Learning for Histology and Gene Expression in Spatial Transcriptomics
Invisible Architectures of Thought: Toward a New Science of AI as Cognitive Infrastructure
Revisiting Pre-trained Language Models for Vulnerability Detection
MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning
Scaling Decentralized Learning with FLock
SegQuant: A Semantics-Aware and Generalizable Quantization Framework for Diffusion Models
Apple Intelligence Foundation Language Models: Tech Report 2025
Optimistic Exploration for Risk-Averse Constrained Reinforcement Learning
PyVision: Agentic Vision with Dynamic Tooling
DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective
RoboTwin 2.0: A Scalable Data Generator and Benchmark with Strong Domain Randomization for Robust Bimanual Robotic Manipulation
Analyzing Character Representation in Media Content using Multimodal Foundation Model: Effectiveness and Trust
MEraser: An Effective Fingerprint Erasure Approach for Large Language Models
CoQuIR: A Comprehensive Benchmark for Code Quality-Aware Information Retrieval
DreamActor-H1: High-Fidelity Human-Product Demonstration Video Generation via Motion-designed Diffusion Transformers
Pseudo-Simulation for Autonomous Driving
BinConv: A Neural Architecture for Ordinal Encoding in Time-Series Forecasting
FaceEditTalker: Controllable Talking Head Generation with Facial Attribute Editing
EnvInjection: Environmental Prompt Injection Attack to Multi-modal Web Agents
X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real
Heat Diffusion Models - Interpixel Attention Mechanism
Bidirectional Task-Motion Planning Based on Hierarchical Reinforcement Learning for Strategic Confrontation
Multi-Type Context-Aware Conversational Recommender Systems via Mixture-of-Experts
Pricing AI Model Accuracy
Evaluating the Fitness of Ontologies for the Task of Question Generation
Utility-Focused LLM Annotation for Retrieval and Retrieval-Augmented Generation
PGAD: Prototype-Guided Adaptive Distillation for Multi-Modal Learning in AD Diagnosis
Constructing a Norm for Children's Scientific Drawing: Distribution Features Based on Semantic Similarity of Large Language Models
An Empirical Risk Minimization Approach for Offline Inverse RL and Dynamic Discrete Choice Model
Efficient PINNs via Multi-Head Unimodular Regularization of the Solutions Space
Statistical learning does not always entail knowledge
Score-based Generative Diffusion Models for Social Recommendations
PromptKeeper: Safeguarding System Prompts for LLMs
X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models
Understanding Fairness-Accuracy Trade-offs in Machine Learning Models: Does Promoting Fairness Undermine Performance?
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Leveraging Multi-facet Paths for Heterogeneous Graph Representation Learning
Training with Explanations Alone: A New Paradigm to Prevent Shortcut Learning
Generation of Geodesics with Actor-Critic Reinforcement Learning to Predict Midpoints
TabSketchFM: Sketch-based Tabular Representation Learning for Data Discovery over Data Lakes
HoneyBee: A Scalable Modular Framework for Creating Multimodal Oncology Datasets with Foundational Embedding Models
StepWiser: Stepwise Generative Judges for Wiser Reasoning
AniME: Adaptive Multi-Agent Planning for Long Animation Generation
AppAgent-Pro: A Proactive GUI Agent System for Multidomain Information Integration and User Assistance
AI Chaperones Are (Really) All You Need to Prevent Parasocial Relationships with Chatbots
Nemori: Self-Organizing Agent Memory Inspired by Cognitive Science
General agents contain world models
Approximate Lifted Model Construction
Fitness Landscape of Large Language Model-Assisted Automated Algorithm Search
Synthesizing High-Quality Programming Tasks with LLM-based Expert and Student Agents
Preference Elicitation for Multi-objective Combinatorial Optimization with Active Learning and Maximum Likelihood Estimation
Reference-Aligned Retrieval-Augmented Question Answering over Heterogeneous Proprietary Documents
Demonstrating specification gaming in reasoning models
AirRAG: Autonomous Strategic Planning and Reasoning Steer Retrieval Augmented Generation
Think Smart, Act SMARL! Analyzing Probabilistic Logic Shields for Multi-Agent Reinforcement Learning
From Evidence to Decision: Exploring Evaluative AI
CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning
Discrete-Guided Diffusion for Scalable and Safe Multi-Robot Motion Planning
Patch Progression Masked Autoencoder with Fusion CNN Network for Classifying Evolution Between Two Pairs of 2D OCT Slices
DeepScholar-Bench: A Live Benchmark and Automated Evaluation for Generative Research Synthesis
Large Language Models (LLMs) for Electronic Design Automation (EDA)
Symphony: A Decentralized Multi-Agent Framework for Scalable Collective Intelligence
HPC Digital Twins for Evaluating Scheduling Policies, Incentive Structures and their Impact on Power and Cooling
Decomposing Behavioral Phase Transitions in LLMs: Order Parameters for Emergent Misalignment
Cross-Platform E-Commerce Product Categorization and Recategorization: A Multimodal Hierarchical Classification Approach
Linear-Time Demonstration Selection for In-Context Learning via Gradient Estimation
MathBuddy: A Multimodal System for Affective Math Tutoring
Diffusion Language Models Know the Answer Before Decoding
GLSim: Detecting Object Hallucinations in LVLMs via Global-Local Similarity
Dhati+: Fine-tuned Large Language Models for Arabic Subjectivity Evaluation
WaveHiT-SR: Hierarchical Wavelet Network for Efficient Image Super-Resolution
The Next Layer: Augmenting Foundation Models with Structure-Preserving and Attention-Guided Learning for Local Patches to Global Context Awareness in Computational Pathology
Logical Reasoning with Outcome Reward Models for Test-Time Scaling
The Information Dynamics of Generative Diffusion
AI-Powered Detection of Inappropriate Language in Medical School Curricula
Generative AI for Testing of Autonomous Driving Systems: A Survey
Multispectral LiDAR data for extracting tree points in urban and suburban areas
LL3M: Large Language 3D Modelers
Created by
Haebom
Authors
Sining Lu, Guan Chen, Nam Anh Dinh, Itai Lang, Ari Holtzman, Rana Hanocka
Overview
LL3M is a multi-agent system that generates 3D assets by leveraging pretrained large language models (LLMs) to write interpretable Python code for Blender. Unlike conventional generative approaches that learn from 3D datasets, it reframes shape creation as a code-writing task, which strengthens modularity, ease of editing, and integration with artist workflows. Given a text prompt, LL3M coordinates a team of specialized LLM agents to plan, retrieve, write, debug, and refine Blender scripts that create and edit both geometry and appearance. The generated code serves as a high-level, interpretable, human-readable, well-documented representation of scenes and objects, making full use of sophisticated Blender constructs (such as bmesh, geometry modifiers, and shader nodes) for diverse, unconstrained shapes, materials, and scenes. The code also offers many avenues for further editing and experimentation by additional agents and by humans, via code adjustments or procedural parameters. This medium naturally enables a co-creative loop within the system: agents can automatically self-critique using both the code and visual feedback, while iterative user guidance provides an intuitive way to refine assets. A shared code context across agents gives awareness of previous attempts, and BlenderRAG, a retrieval-augmented-generation knowledge base built from the Blender API documentation, supplies agents with examples, types, and functions that improve advanced modeling tasks and code accuracy. The authors demonstrate LL3M's effectiveness across diverse shape categories, style and material edits, and user-driven refinement. The experiments show the power of code as a generative, interpretable medium for creating 3D assets. The project page is https://threedle.github.io/ll3m.
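To make the idea concrete, below is a minimal, hypothetical sketch of the kind of interpretable Blender Python script such a system is described as producing. It is not taken from the paper: the object, modifier, and material names are invented for illustration, and it assumes Blender 2.8+ with the standard bpy/bmesh API. It builds a simple bmesh primitive, adds a Subdivision Surface modifier so smoothness stays a procedural parameter, and assigns a shader-node material, covering the categories of Blender constructs (bmesh, geometry modifiers, shader nodes) that the summary mentions.

import bpy
import bmesh

# Create a new mesh datablock and object, and link the object into the scene.
# The name "VaseBody" is an arbitrary placeholder.
mesh = bpy.data.meshes.new("VaseBody")
obj = bpy.data.objects.new("VaseBody", mesh)
bpy.context.collection.objects.link(obj)

# Build the base geometry with bmesh: a unit cube as a stand-in primitive.
bm = bmesh.new()
bmesh.ops.create_cube(bm, size=2.0)
bm.to_mesh(mesh)
bm.free()

# Add a geometry modifier so smoothness remains an editable, procedural parameter
# rather than baked-in vertex data.
subsurf = obj.modifiers.new(name="Smooth", type='SUBSURF')
subsurf.levels = 2

# Create a shader-node material and set a couple of Principled BSDF inputs
# (assumes the default English node name "Principled BSDF").
mat = bpy.data.materials.new(name="Ceramic")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Base Color"].default_value = (0.8, 0.3, 0.2, 1.0)
bsdf.inputs["Roughness"].default_value = 0.35
obj.data.materials.append(mat)

Because the asset is expressed as code like this, a later agent or a human can change a single line (the modifier level, the base color, the primitive size) instead of editing raw geometry, which is the editing and co-creation property the paper emphasizes.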
Takeaways, Limitations
• Takeaways:
◦ Presents a new paradigm for generating 3D assets.
◦ Support for diverse shapes, styles, and materials: complex and varied 3D models can be built by exploiting Blender's full range of features.
◦ Support for a collaborative generation process with the user: iterative, code-based revision and refinement is possible.
◦ High-quality, interpretable code generation: the generated code can be read and modified by people, which improves usability.
• Limitations:
◦ Dependence on the LLM and the Blender API: results are bounded by the performance and limitations of both.
◦ Possible performance degradation for complex models: generating complex 3D models requires more time and resources.
◦ Difficulty of debugging and error handling: fixing bugs and handling errors in the generated code takes additional effort.
◦ Need for Blender expertise: understanding and modifying the generated code requires some familiarity with Blender.
View PDF