Daily Arxiv
This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is run on a non-profit basis.
Copyright of the papers belongs to the authors and their institutions; please cite the source when sharing.
Structure Transfer: an Inference-Based Calculus for the Transformation of Representations
Ensemble of Pathology Foundation Models for MIDOG 2025 Track 2: Atypical Mitosis Classification
AudioCodecBench: A Comprehensive Benchmark for Audio Codec Evaluation
Understanding Space Is Rocket Science - Only Top Reasoning Models Can Solve Spatial Understanding Tasks
DaMoC: Efficiently Selecting the Optimal Large Language Model for Fine-tuning Domain Tasks Based on Data and Model Compression
Modular Techniques for Synthetic Long-Context Data Generation in Language Model Training and Evaluation
EZhouNet: A framework based on graph neural network and anchor interval for the respiratory sound event detection
AImoclips: A Benchmark for Evaluating Emotion Conveyance in Text-to-Music Generation
TimeCopilot
First Order Model-Based RL through Decoupled Backpropagation
Pilot Study on Generative AI and Critical Thinking in Higher Education Classrooms
Beacon: Post-Training Quantization with Integrated Grid Selection
Is Artificial Intelligence Reshaping the Landscape of the International Academic Community of Geosciences?
Vectorized Attention with Learnable Encoding for Quantum Transformer
Transplant Then Regenerate: A New Paradigm for Text Data Augmentation
Depth-Breadth Synergy in RLVR: Unlocking LLM Reasoning Gains with Adaptive Exploration
MultiGen: Child-Friendly Multilingual Speech Generator with LLMs
StreetViewAI: Making Street View Accessible Using Context-Aware Multimodal AI
Street-Level AI: Are Large Language Models Ready for Real-World Judgments?
The KG-ER Conceptual Schema Language
LOTS of Fashion! Multi-Conditioning for Image Generation via Sketch-Text Pairing
Conditional Video Generation for High-Efficiency Video Compression
TriCLIP-3D: A Unified Parameter-Efficient Framework for Tri-Modal 3D Visual Grounding based on CLIP
Demographic-aware fine-grained classification of pediatric wrist fractures
An Analysis of Action-Value Temporal-Difference Methods That Learn State Values
Stochastic Parameter Decomposition
Auto-Regressive vs Flow-Matching: a Comparative Study of Modeling Paradigms for Text-to-Music Generation
MiniCPM4: Ultra-Efficient LLMs on End Devices
Evaluating the Efficacy of LLM-Based Reasoning for Multiobjective HPC Job Scheduling
How Can I Publish My LLM Benchmark Without Giving the True Answers Away?
Optimization of Module Transferability in Single Image Super-Resolution: Universality Assessment and Cycle Residual Blocks
Transferable Mask Transformer: Cross-domain Semantic Segmentation with Region-adaptive Transferability Estimation
RBT4DNN: Requirements-based Testing of Neural Networks
Robust Offline Imitation Learning Through State-level Trajectory Stitching
Beyond holography: the entropic quantum gravity foundations of image processing
KNighter: Transforming Static Analysis with LLM-Synthesized Checkers
FRIDA to the Rescue! Analyzing Synthetic Data Effectiveness in Object-Based Common Sense Reasoning for Disaster Response
CoDiff: Conditional Diffusion Model for Collaborative 3D Object Detection
Rapid Word Learning Through Meta In-Context Learning
Image Embedding Sampling Method for Diverse Captioning
Is an Ultra Large Natural Image-Based Foundation Model Superior to a Retina-Specific Model for Detecting Ocular and Systemic Diseases?
Extended Histogram-based Outlier Score (EHBOS)
A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models
Breaking the Context Bottleneck on Long Time Series Forecasting
Defending LVLMs Against Vision Attacks through Partial-Perception Supervision
ACING: Actor-Critic for Instruction Learning in Black-Box LLMs
Kolb-Based Experiential Learning for Generalist Agents with Human-Level Kaggle Data Science Performance
Quantifying Calibration Error in Neural Networks Through Evidence-Based Theory
Robust training of implicit generative models for multivariate and heavy-tailed distributions with an invariant statistical loss
Learning from 10 Demos: Generalisable and Sample-Efficient Policy Learning with Oriented Affordance Frames
AutoPETIII: The Tracer Frontier. What Frontier?
Long Input Sequence Network for Long Time Series Forecasting
FFHFlow: Diverse and Uncertainty-Aware Dexterous Grasp Generation via Flow Variational Inference
Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers
MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming
Diffusion on language model encodings for protein sequence generation
Style Transfer to Calvin and Hobbes comics using Stable Diffusion
Autonomation, Not Automation: Activities and Needs of European Fact-checkers as a Basis for Designing Human-Centered AI Systems
Plan Verification for LLM-Based Embodied Task Completion Agents
EigenBench: A Comparative Behavioral Measure of Value Alignment
Oyster-I: Beyond Refusal - Constructive Safety Alignment for Responsible Language Models
Extending FKG.in: Towards a Food Claim Traceability Network
DeepVIS: Bridging Natural Language and Data Visualization Through Step-wise Reasoning
Theory of Mind Using Active Inference: A Framework for Multi-Agent Cooperation
CP-Bench: Evaluating Large Language Models for Constraint Modelling
Axiomatics of Restricted Choices by Linear Orders of Sets with Minimum as Fallback
DMN-Guided Prompting: A Framework for Controlling LLM Behavior
Computational Basis of LLM's Decision Making in Social Simulation
Science Across Languages: Assessing LLM Multilingual Translation of Scientific Papers
Enhancing FKG.in: automating Indian food composition analysis
WASP: A Weight-Space Approach to Detecting Learned Spuriousness
Transferable Belief Model on Quantum Circuits
PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents
(Ir)rationality in AI: State of the Art, Research Challenges and Open Questions
Intelligence Primer
ChronoGraph: A Real-World Graph-Based Multivariate Time Series Dataset
Delta Activations: A Representation for Finetuned Large Language Models
DEXOP: A Device for Robotic Transfer of Dexterous Human Manipulation
Towards a Unified View of Large Language Model Post-Training
No Thoughts Just AI: Biased LLM Recommendations Limit Human Agency in Resume Screening
IPA: An Information-Preserving Input Projection Framework for Efficient Foundation Model Adaptation
SSGaussian: Semantic-Aware and Structure-Preserving 3D Style Transfer
Parking Availability Prediction via Fusing Multi-Source Data with A Self-Supervised Learning Enhanced Spatio-Temporal Inverted Transformer
PARCO: Phoneme-Augmented Robust Contextual ASR via Contrastive Entity Disambiguation
AUDETER: A Large-scale Dataset for Deepfake Audio Detection in Open Worlds
From Editor to Dense Geometry Estimator
Decoupled Entity Representation Learning for Pinterest Ads Ranking
Facts Fade Fast: Evaluating Memorization of Outdated Medical Knowledge in Large Language Models
HumAIne-Chatbot: Real-Time Personalized Conversational AI via Reinforcement Learning
Reinforcement Learning for Robust Ageing-Aware Control of Li-ion Battery Systems with Data-Driven Formal Verification
An Empirical Study of Vulnerabilities in Python Packages and Their Detection
How many patients could we save with LLM priors?
Learning Active Perception via Self-Evolving Preference Optimization for GUI Grounding
MAGneT: Coordinated Multi-Agent Generation of Synthetic Multi-Turn Mental Health Counseling Sessions
VisioFirm: Cross-Platform AI-assisted Annotation Tool for Computer Vision
Crossing the Species Divide: Transfer Learning from Speech to Animal Sounds
YOLO Ensemble for UAV-based Multispectral Defect Detection in Wind Turbine Components
Attention as an Adaptive Filter
TAGAL: Tabular Data Generation using Agentic LLM Methods
Enhancing Technical Documents Retrieval for RAG
Exploring Response Uncertainty in MLLMs: An Empirical Evaluation under Misleading Scenarios
Created by Haebom
Authors
Yunkai Dang, Mengxi Gao, Yibo Yan, Xin Zou, Yanggan Gu, Jungang Li, Jingyu Wang, Peijie Jiang, Aiwei Liu, Jia Liu, Xuming Hu
Overview
This paper investigates how easily multimodal large language models (MLLMs) can be misled, focusing on the phenomenon of response uncertainty under misleading information. Using nine standard datasets and twelve state-of-the-art open-source MLLMs, the authors find that a single misleading cue flips previously correct answers in up to 65% of cases. To quantify this, they introduce a two-stage evaluation pipeline (first verifying the model's original response, then measuring the misleading rate after injecting a misleading instruction) and collect the high-error examples into a Multimodal Uncertainty Benchmark (MUB). An extensive evaluation of twelve open-source and five closed-source models shows an average misleading rate above 86%, exceeding 67.19% for explicit cues and 80.67% for implicit cues. Finally, fine-tuning open-source MLLMs on a mixed-instruction dataset of 2,000 samples substantially reduces the misleading rate (6.97% for explicit cues and 32.77% for implicit cues).
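As a rough illustration of the two-stage pipeline described above, the Python sketch below computes a misleading rate from a model-query function and an answer checker. All names here (`ask_model`, `is_correct`, the sample fields) are hypothetical placeholders, not the paper's actual code.

```python
from typing import Any, Callable, Dict, List

def misleading_rate(
    ask_model: Callable[[str, Any], str],    # (prompt, image) -> model answer (assumed interface)
    is_correct: Callable[[str, str], bool],  # (answer, ground_truth) -> bool (assumed interface)
    samples: List[Dict[str, Any]],           # each sample: question, image, ground_truth
    misleading_cue: str,
) -> float:
    """Stage 1: keep only samples the model answers correctly on its own.
    Stage 2: re-ask with the misleading cue appended and count flipped answers."""
    initially_correct = [
        s for s in samples
        if is_correct(ask_model(s["question"], s["image"]), s["ground_truth"])
    ]

    flipped = 0
    for s in initially_correct:
        misled_answer = ask_model(s["question"] + "\n" + misleading_cue, s["image"])
        if not is_correct(misled_answer, s["ground_truth"]):
            flipped += 1  # a previously correct answer was overturned

    return flipped / len(initially_correct) if initially_correct else 0.0
```

In the paper's setting this would be run with both explicit and implicit cues, and the samples with the highest misleading rates would be gathered into the MUB benchmark.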
Takeaways, Limitations
• Takeaways:
◦ Systematically examined MLLMs' vulnerability to misleading inputs and the resulting response uncertainty.
◦ Proposed a new benchmark (MUB) for improving the reliability of MLLMs.
◦ Showed that fine-tuning can substantially reduce the misleading rate of MLLMs.
◦ By analyzing MLLMs' vulnerability to different kinds of misleading information and presenting ways to mitigate it, the work can contribute to the safety and reliability of MLLMs in real-world applications.
• Limitations:
◦ The benchmark and fine-tuning dataset currently focus on specific types of misleading cues, so generalization to other types may be limited.
◦ Even after fine-tuning, the misleading rate for implicit cues remains considerably high.
◦ Fine-tuning was applied only to open-source models, so whether the mitigation generalizes to commercial models requires further study.
◦ The relatively small size of the dataset used for the error-reducing fine-tuning can be pointed out as a limitation.
View PDF