Daily Arxiv
A page that collects artificial-intelligence papers published around the world.
Summaries on this page are produced with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and affiliated institutions; please cite the source when sharing.
BRIDGE - Building Reinforcement-Learning Depth-to-Image Data Generation Engine for Monocular Depth Estimation
VSSFlow: Unifying Video-conditioned Sound and Speech Generation via Joint Learning
UI-UG: A Unified MLLM for UI Understanding and Generation
Q-Mirror: Unlocking the Multi-Modal Potential of Scientific Text-Only QA Pairs
Conda: Column-Normalized Adam for Training Large Language Models Faster
TENET: Leveraging Tests Beyond Validation for Code Generation
FameMind: Frame-Interleaved Video Reasoning via Reinforcement Learning
Explore-Execute Chain: Towards an Efficient Structured Reasoning Paradigm
Sequence Pathfinder for Multi-Agent Pickup and Delivery in the Warehouse
MMPB: It's Time for Multi-Modal Personalization
Painless Activation Steering: An Automated, Lightweight Approach for Post-Training Large Language Models
A Meta-Analysis of LLM Effects on Students across Qualification, Socialisation, and Subjectification
Wavelet-Induced Rotary Encodings: RoPE Meets Graphs
Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models
Provable Scaling Laws of Feature Emergence from Learning Dynamics of Grokking
Predicting LLM Reasoning Performance with Small Proxy Model
Beyond the Individual: Introducing Group Intention Forecasting with SHOT Dataset
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation
Video models are zero-shot learners and reasoners
Beyond Sharp Minima: Robust LLM Unlearning via Feedback-Guided Multi-Point Optimization
U-Mamba2-SSL for Semi-Supervised Tooth and Pulp Segmentation in CBCT
Graph Coloring for Multi-Task Learning
KANO: Kolmogorov-Arnold Neural Operator
Robust LLM Training Infrastructure at ByteDance
Communications to Circulations: 3D Wind Field Retrieval and Real-Time Prediction Using 5G GNSS Signals and Deep Learning
FlowRL: Matching Reward Distributions for LLM Reasoning
DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion
Multi-Robot Task Planning for Multi-Object Retrieval Tasks with Distributed On-Site Knowledge via Large Language Models
U-Mamba2: Scaling State Space Models for Dental Anatomy Segmentation in CBCT
MindVL: Towards Efficient and Effective Training of Multimodal Large Language Models on Ascend NPUs
Inducing Uncertainty on Open-Weight Models for Test-Time Privacy in Image Recognition
Ban&Pick: Enhancing Performance and Efficiency of MoE-LLMs via Smarter Routing
LiDAR-BIND-T: Improved and Temporally Consistent Sensor Modality Translation and Fusion for Robotic Applications
Long-Horizon Visual Imitation Learning via Plan and Code Reflection
Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families
Learning to Generate Unit Test via Adversarial Reinforcement Learning
Diffusion Language Models Know the Answer Before Decoding
Object Detection with Multimodal Large Vision-Language Models: An In-depth Review
Image-Conditioned 3D Gaussian Splat Quantization
The DNA of nuclear models: How AI predicts nuclear masses
FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI
Learning Unified User Quantized Tokenizers for User Representation
A Survey on Code Generation with LLM-based Agents
The Ever-Evolving Science Exam
The Impact of Language Mixing on Bilingual LLM Reasoning
Mind the Gap: A Review of Arabic Post-Training Datasets and Their Limitations
Linguistic and Embedding-Based Profiling of Texts generated by Humans and Large Language Models
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation
CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design
Scaling RL to Long Videos
On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification
Reinforcement Fine-Tuning Naturally Mitigates Forgetting in Continual Post-Training
HumanVideo-MME: Benchmarking MLLMs for Human-Centric Video Understanding
LATTE: Latent Trajectory Embedding for Diffusion-Generated Image Detection
Deep Graph Learning for Industrial Carbon Emission Analysis and Policy Impact
DNN-Based Precoding in RIS-Aided mmWave MIMO Systems With Practical Phase Shift
SoMi-ToM: Evaluating Multi-Perspective Theory of Mind in Embodied Social Interactions
When Does Multimodality Lead to Better Time Series Forecasting?
FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
Decoupled Classifier-Free Guidance for Counterfactual Diffusion Models
QGuard: Question-based Zero-shot Guard for Multi-modal LLM Safety
VITA: Zero-Shot Value Functions via Test-Time Adaptation of Vision-Language Models
A theoretical framework for self-supervised contrastive learning for continuous dependent data
Efficient Context Selection for Long-Context QA: No Tuning, No Iteration, Just Adaptive-$k$
Resisting Contextual Interference in RAG via Parametric-Knowledge Reinforcement
Static Word Embeddings for Sentence Semantic Representation
Negative-Guided Subject Fidelity Optimization for Zero-Shot Subject-Driven Generation
Multi Layered Autonomy and AI Ecologies in Robotic Art Installations
WorldGym: World Model as An Environment for Policy Evaluation
Personalized Subgraph Federated Learning with Differentiable Auxiliary Projections
ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models
Finite Sample Analysis of Linear Temporal Difference Learning with Arbitrary Features
SelfReflect: Can LLMs Communicate Their Internal Answer Distribution?
Value-Guided Search for Efficient Chain-of-Thought Reasoning
LLM Agents for Interactive Exploration of Historical Cadastre Data: Framework and Application to Venice
Find the Fruit: Zero-Shot Sim2Real RL for Occlusion-Aware Plant Manipulation
AudioTrust: Benchmarking the Multifaceted Trustworthiness of Audio Large Language Models
Causal Interventions Reveal Shared Structure Across English Filler-Gap Constructions
DEBATE, TRAIN, EVOLVE: Self Evolution of Language Model Reasoning
Octic Vision Transformers: Quicker ViTs Through Equivariance
Silent Leaks: Implicit Knowledge Extraction Attack on RAG Systems through Benign Queries
ELEPHANT: Measuring and understanding social sycophancy in LLMs
Structured Agent Distillation for Large Language Model
ScSiameseClu: A Siamese Clustering Framework for Interpreting single-cell RNA Sequencing Data
DisCO: Reinforcing Large Reasoning Models with Discriminative Constrained Optimization
Modeling Saliency Dataset Bias
TensorRL-QAS: Reinforcement learning with tensor networks for improved quantum architecture search
Scalable LLM Math Reasoning Acceleration with Low-rank Distillation
Simple yet Effective Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Stochastic Layer-wise Learning: Scalable and Efficient Alternative to Backpropagation
Fair Uncertainty Quantification for Depression Prediction
Adaptive Rectification Sampling for Test-Time Compute Scaling
Lobster: A GPU-Accelerated Framework for Neurosymbolic Programming
Enabling Rapid Shared Human-AI Mental Model Alignment via the After-Action Review
CODA: Repurposing Continuous VAEs for Discrete Tokenization
Value Profiles for Encoding Human Variation
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
A Survey on SAR ship classification using Deep Learning
Revisiting semi-supervised learning in the era of foundation models
Rethinking Diffusion Model in High Dimension
ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models
Created by
Haebom
Authors
Dingming Li, Hongxing Li, Zixuan Wang, Yuchen Yan, Hang Zhang, Siqi Chen, Guiyang Hou, Shengpei Jiang, Wenqi Zhang, Yongliang Shen, Weiming Lu, Yueting Zhuang
ViewSpatial-Bench: Multi-Viewpoint Spatial Localization Evaluation
Overview
Vision-language models (VLMs) have demonstrated strong capabilities in understanding and reasoning about visual content, but they struggle with tasks that require perspective-taking and spatial reasoning. Current VLMs excel mainly at egocentric spatial reasoning from the camera's viewpoint, yet fail to generalize to allocentric viewpoints when they must adopt another entity's spatial frame of reference. ViewSpatial-Bench is the first comprehensive benchmark designed to evaluate multi-viewpoint spatial localization recognition; it covers five task types and is supported by an automated 3D annotation pipeline that generates precise directional labels. A comprehensive evaluation of diverse VLMs on ViewSpatial-Bench reveals a substantial performance gap: models perform reasonably on camera-perspective tasks, but accuracy drops when they must reason from a human's viewpoint. Fine-tuning VLMs on a multi-viewpoint spatial dataset yields a 46.24% overall performance improvement across tasks, providing evidence that modeling 3D spatial relationships strengthens VLMs' spatial understanding.
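The evaluation described above splits results by whose viewpoint the question assumes. A minimal sketch of that protocol is below; the sample fields and the `ask_vlm` callable are illustrative assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch of a ViewSpatial-Bench-style evaluation: score a VLM
# separately on camera-perspective (egocentric) and person-perspective
# (allocentric) spatial questions, so the perspective gap becomes visible.
from collections import defaultdict

def evaluate(samples, ask_vlm):
    """Return accuracy per perspective category.

    samples: iterable of dicts with keys 'image', 'question',
             'answer' (gold direction label), and 'perspective'
             ('camera' or 'person').
    ask_vlm: callable(image, question) -> predicted label string.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        pred = ask_vlm(s["image"], s["question"])
        total[s["perspective"]] += 1
        if pred.strip().lower() == s["answer"].strip().lower():
            correct[s["perspective"]] += 1
    return {k: correct[k] / total[k] for k in total}

# Toy run with a stub model that always answers "left":
demo = [
    {"image": None, "perspective": "camera", "answer": "left",
     "question": "From the camera, is the cup left or right of the plate?"},
    {"image": None, "perspective": "person", "answer": "right",
     "question": "From the person's viewpoint, is the cup left or right?"},
]
print(evaluate(demo, lambda img, q: "left"))  # {'camera': 1.0, 'person': 0.0}
```

Reporting the two categories separately, rather than one pooled accuracy, is what exposes the egocentric-vs-allocentric gap the paper highlights.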
Takeaways and Limitations
• Current VLMs are strong at egocentric (camera-viewpoint) spatial reasoning but fail to generalize to allocentric (other-centered) viewpoints.
• ViewSpatial-Bench is the first comprehensive benchmark for evaluating multi-viewpoint spatial localization recognition.
• Modeling 3D spatial relationships improves VLMs' spatial understanding.
• Fine-tuning VLMs can improve their overall performance on these tasks.
• This work provides an important benchmark for spatial intelligence.
View PDF