Daily Arxiv
This page collects papers on artificial intelligence published around the world.
The summaries are generated with Google Gemini, and the page is run on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.
Merge-of-Thought Distillation
OTESGN: Optimal Transport-Enhanced Syntactic-Semantic Graph Networks for Aspect-Based Sentiment Analysis
MESH - Understanding Videos Like Human: Measuring Hallucinations in Large Video Models
Adapting Vision-Language Models for Neutrino Event Classification in High-Energy Physics
Symmetry-Guided Multi-Agent Inverse Reinforcement Learning
AU-Harness: An Open-Source Toolkit for Holistic Evaluation of Audio LLMs
Expert-Guided Explainable Few-Shot Learning for Medical Image Diagnosis
Towards Generalized Routing: Model and Agent Orchestration for Adaptive and Efficient Inference
MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining
Demo: Healthcare Agent Orchestrator (HAO) for Patient Summarization in Molecular Tumor Boards
Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
Beyond the Pre-Service Horizon: Infusing In-Service Behavior for Improved Financial Risk Forecasting
On Synthesis of Timed Regular Expressions
TinyDef-DETR: A DETR-based Framework for Defect Detection in Transmission Lines from UAV Imagery
LiDAR-BIND-T: Improved and Temporally Consistent Sensor Modality Translation and Fusion for Robotic Applications
From Vision to Validation: A Theory- and Data-Driven Construction of a GCC-Specific AI Adoption Index
A Comprehensive Guide to Differential Privacy: From Theory to User Expectations
The Architecture of AI Transformation: Four Strategic Patterns and an Emerging Frontier
FLM-Audio: Natural Monologues Improves Native Full-Duplex Chatbots via Dual Training
Deep Learning-Based Rock Particulate Classification Using Attention-Enhanced ConvNeXt
The Information Dynamics of Generative Diffusion
Data-Augmented Few-Shot Neural Stencil Emulation for System Identification of Computer Models
Group Expectation Policy Optimization for Heterogeneous Reinforcement Learning
Pretrained Conformers for Audio Fingerprinting and Retrieval
Towards Scalable Training for Handwritten Mathematical Expression Recognition
To Theoretically Understand Transformer-Based In-Context Learning for Optimizing CSMA
Klear-CodeTest: Scalable Test Case Generation for Code Reinforcement Learning
HiD-VAE: Interpretable Generative Recommendation via Hierarchical and Disentangled Semantic IDs
MagicGUI: A Foundational Mobile GUI Agent with Scalable Data Pipeline and Reinforcement Fine-tuning
Villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models
New Kid in the Classroom: Exploring Student Perceptions of AI Coding Assistants
Can Large Language Models Understand As Well As Apply Patent Regulations to Pass a Hands-On Patent Attorney Test?
Uncertainty-aware Diffusion and Reinforcement Learning for Joint Plane Localization and Anomaly Diagnosis in 3D Ultrasound
Uncertainty Estimation by Human Perception versus Neural Models
Persistent Homology of Topic Networks for the Prediction of Reader Curiosity
Task Matters: Knowledge Requirements Shape LLM Responses to Context-Memory Conflict
Crack Path Prediction with Operator Learning using Discrete Particle System data Generation
Diffusion Graph Neural Networks for Robustness in Olfaction Sensors and Datasets
MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering
An Ontology-Driven Graph RAG for Legal Norms: A Structural, Temporal, and Deterministic Approach
Combating Falsification of Speech Videos with Live Optical Signatures (Extended Version)
Early Exit and Multi Stage Knowledge Distillation in VLMs for Video Summarization
Critical Challenges and Guidelines in Evaluating Synthetic Tabular Data: A Systematic Review
Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models
Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation
Entropy-Gated Branching for Efficient Test-Time Reasoning
SWI: Speaking with Intent in Large Language Models
Byzantine-Robust Federated Learning Using Generative Adversarial Networks
VeriSafe Agent: Safeguarding Mobile GUI Agent via Logic-based Action Verification
MIND: Towards Immersive Psychological Healing with Multi-agent Inner Dialogue
V-HOP: Visuo-Haptic 6D Object Pose Tracking
EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds
Knowledge-Guided Biomarker Identification for Label-Free Single-Cell RNA-Seq Data: A Reinforcement Learning Perspective
MERaLiON-SpeechEncoder: Towards a Speech Foundation Model for Singapore and Beyond
RED: Unleashing Token-Level Rewards from Holistic Feedback via Reward Redistribution
IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves
DeepVoting: Learning and Fine-Tuning Voting Rules with Canonical Embeddings
Rethinking Disentanglement under Dependent Factors of Variation
Discovering physical laws with parallel symbolic enumeration
Semantic Augmentation in Images using Language
Algorithmic Collusion by Large Language Models
A minimal coalition logic
Deep Reinforcement Learning for Inventory Networks: Toward Reliable Policy Optimization
Inconsistency Handling in Prioritized Databases with Universal Constraints: Complexity Analysis and Links with Active Integrity Constraints
Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference
CogGuide: Human-Like Guidance for Zero-Shot Omni-Modal Reasoning
TreeGPT: Pure TreeFFN Encoder-Decoder Architecture for Structured Reasoning Without Attention Mechanisms
Robix: A Unified Model for Robot Interaction, Reasoning and Planning
KROMA: Ontology Matching with Knowledge Retrieval and Large Language Models
Scaling LLM Planning: NL2FLOW for Parametric Problem Generation and Rigorous Evaluation
Optimizing Length Compression in Large Reasoning Models
LLMs for sensory-motor control: Combining in-context and iterative learning
Effort-aware Fairness: Incorporating a Philosophy-informed, Human-centered Notion of Effort into Algorithmic Fairness Metrics
Simulating Human-like Daily Activities with Desire-driven Autonomy
Enhancing Few-Shot Transfer Learning with Optimized Multi-Task Prompt Tuning through Modular Prompt Composition
ButterflyQuant: Ultra-low-bit LLM Quantization through Learnable Orthogonal Butterfly Transforms
CDE: Curiosity-Driven Exploration for Efficient Reinforcement Learning in Large Language Models
SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
Feasibility-Guided Fair Adaptive Offline Reinforcement Learning for Medicaid Care Management
Retrieval-Augmented Generation for Reliable Interpretation of Radio Regulations
Explaining Concept Drift through the Evolution of Group Counterfactuals
LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering
Mechanistic Learning with Guided Diffusion Models to Predict Spatio-Temporal Brain Tumor Growth
Graph Alignment via Dual-Pass Spectral Encoding and Latent Space Communication
ObjectReact: Learning Object-Relative Control for Visual Navigation
Fluent but Unfeeling: The Emotional Blind Spots of Language Models
Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification
An improved educational competition optimizer with multi-covariance learning operators for global optimization problems
Improving Video Diffusion Transformer Training by Multi-Feature Fusion and Alignment from Self-Supervised Vision Encoders
A modified RIME algorithm with covariance learning and diversity enhancement for numerical optimization
Towards Explainable Job Title Matching: Leveraging Semantic Textual Relatedness and Knowledge Graphs
Explainable AI for Accelerated Microstructure Imaging: A SHAP-Guided Protocol on the Connectome 2.0 scanner
Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India
OpenFake: An Open Dataset and Platform Toward Large-Scale Deepfake Detection
Prompt Pirates Need a Map: Stealing Seeds helps Stealing Prompts
Resource-Efficient Glioma Segmentation on Sub-Saharan MRI
ENSI: Efficient Non-Interactive Secure Inference for Large Language Models
We're Still Doing It (All) Wrong: Recommender Systems, Fifteen Years Later
LLMs Don't Know Their Own Decision Boundaries: The Unreliability of Self-Generated Counterfactual Explanations
MetaLLMix : An XAI Aided LLM-Meta-learning Based Approach for Hyper-parameters Optimization
LLMs Don't Know Their Own Decision Boundaries: The Unreliability of Self-Generated Counterfactual Explanations
Created by Haebom
Authors
Harry Mayne, Ryan Othniel Kearns, Yushi Yang, Andrew M. Bean, Eoin Delaney, Chris Russell, Adam Mahdi
Overview
This paper evaluates the ability of large language models (LLMs) to explain their own decision-making through self-generated counterfactual explanations (SCEs). An SCE is an explanation in which the model edits an input so that its prediction on that input changes. The study finds that while LLMs readily produce valid SCEs, these rarely involve minimal edits and therefore offer little insight into the models' decision processes. Conversely, when prompted to produce SCEs with minimal edits, the models tend to make changes that are too small to actually flip the prediction. This tension between validity and minimality is observed across several LLMs, datasets, and evaluation settings. The authors conclude that SCEs are not an effective explainability tool and can be misleading about model behaviour. When deploying LLMs in high-stakes settings, the impact of unreliable self-explanations on downstream decisions must be taken into account.
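To make the validity/minimality trade-off concrete, the sketch below shows one way such an evaluation could be scored. It is a minimal illustration under stated assumptions, not the authors' code: `classify` and `generate_sce` are hypothetical callables standing in for the model under test, and a token-level edit ratio is used only as a simple stand-in for whatever minimality measure the paper uses.

```python
# Minimal sketch (not the paper's code) of scoring a self-generated
# counterfactual explanation (SCE). `classify` and `generate_sce` are
# hypothetical callables: the first returns the model's label for a text,
# the second asks the model to minimally edit the text so that its
# prediction changes.
import difflib
from typing import Callable

def edit_ratio(original: str, edited: str) -> float:
    """Fraction of tokens changed -- a simple proxy for edit minimality."""
    matcher = difflib.SequenceMatcher(None, original.split(), edited.split())
    return 1.0 - matcher.ratio()

def evaluate_sce(text: str,
                 classify: Callable[[str], str],
                 generate_sce: Callable[[str, str], str]) -> dict:
    label = classify(text)                      # model's original prediction
    counterfactual = generate_sce(text, label)  # model's self-generated edit
    return {
        "valid": classify(counterfactual) != label,      # validity: does the prediction flip?
        "edit_ratio": edit_ratio(text, counterfactual),  # minimality: smaller is better
    }
```

In this framing, a genuinely informative SCE would score `valid == True` with a low `edit_ratio`; the paper's finding is that models tend to achieve one or the other, but rarely both.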
Takeaways, Limitations
• Takeaways: The paper shows that LLMs' self-generated counterfactual explanations are not an effective way to explain their decision processes and can in fact be misleading about model behaviour. The risk of unreliable self-explanations should be weighed when deploying LLMs in high-stakes settings.
• Limitations: Further work is needed to establish whether the tension between validity and minimality holds consistently across all LLMs, datasets, and evaluation settings. A comparative analysis of SCEs against other types of explainability techniques is also needed.
View PDF