Daily Arxiv
A page collecting artificial-intelligence papers published around the world.
This page summarizes them using Google Gemini and is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.
Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout
Energy Efficiency in AI for 5G and Beyond: A DeepRx Case Study
A PBN-RL-XAI Framework for Discovering a "Hit-and-Run" Therapeutic Strategy in Melanoma
(Almost) Free Modality Stitching of Foundation Models
Prompt4Trust: A Reinforcement Learning Prompt Augmentation Framework for Clinically-Aligned Confidence Calibration in Multimodal Large Language Models
SEALGuard: Safeguarding the Multilingual Conversations in Southeast Asian Languages for LLM Software Systems
Dually Hierarchical Drift Adaptation for Online Configuration Performance Learning
Tree-Structured Parzen Estimator Can Solve Black-Box Combinatorial Optimization More Efficiently
EXPO: Stable Reinforcement Learning with Expressive Policies
Reinforcement Learning with Action Chunking
On the Effect of Instruction Tuning Loss on Generalization
Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models
Text to model via SysML: Automated generation of dynamical system computational models from unstructured natural language text via enhanced System Modeling Language diagrams
Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why
DRAGON: Dynamic RAG Benchmark On News
Solar Flare Prediction Using Long Short-term Memory (LSTM) and Decomposition-LSTM with Sliding Window Pattern Recognition
Conversation Forests: The Key to Fine Tuning Large Language Models for Multi-Turn Medical Conversations is Branching
RAG-R1: Incentivize the Search and Reasoning Capabilities of LLMs through Multi-query Parallelism
Following the Clues: Experiments on Person Re-ID using Cross-Modal Intelligence
Stylometry recognizes human and LLM-generated texts in short samples
QLPro: Automated Code Vulnerability Discovery via LLM and Static Code Analysis Integration
Evaluating Multimodal Large Language Models on Educational Textbook Question Answering
FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
Alleviating User-Sensitive bias with Fair Generative Sequential Recommendation Model
MATE: LLM-Powered Multi-Agent Translation Environment for Accessibility Applications
DeInfoReg: A Decoupled Learning Framework for Better Training Throughput
FLAME: Towards Federated Fine-Tuning Large Language Models Through Adaptive SMoE
ImpliRet: Benchmarking the Implicit Fact Retrieval Challenge
The Price of Freedom: Exploring Expressivity and Runtime Tradeoffs in Equivariant Tensor Products
The Limits of Tractable Marginalization
A quantum semantic framework for natural language processing
ProtocolLLM: RTL Benchmark for SystemVerilog Generation of Communication Protocols
Deepfake Technology Unveiled: The Commoditization of AI and Its Impact on Digital Trust
Training Dynamics Underlying Language Model Scaling Laws: Loss Deceleration and Zero-Sum Learning
Critique-GRPO: Advancing LLM Reasoning with Natural Language and Numerical Feedback
Matrix Is All You Need
Temporal Chunking Enhances Recognition of Implicit Sequential Patterns
Seven Security Challenges That Must be Solved in Cross-domain Multi-agent LLM Systems
PAN-Crafter: Learning Modality-Consistent Alignment for PAN-Sharpening
FlowAlign: Trajectory-Regularized, Inversion-Free Flow-based Image Editing
Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs
FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in LLMs via Structured Reasoning
Multimodal Sentiment Analysis on CMU-MOSEI Dataset using Transformer-based Models
Nexus-Gen: Unified Image Understanding, Generation, and Editing via Prefilled Autoregression in Shared Embedding Space
Leveraging Large Language Models for Multi-Class and Multi-Label Detection of Drug Use and Overdose Symptoms on Social Media
Rethinking the Foundations for Continual Reinforcement Learning
Compositional Flows for 3D Molecule and Synthesis Pathway Co-design
Rethinking RoPE: A Mathematical Blueprint for N-dimensional Positional Embedding
Speculative Automated Refactoring of Imperative Deep Learning Programs to Graph Execution
Test-time Adaptation for Foundation Medical Segmentation Model without Parametric Updates
Style over Substance: Distilled Language Models Reason Via Stylistic Replication
AnnoPage Dataset: Dataset of Non-Textual Elements in Documents with Fine-Grained Categorization
Multi-View Node Pruning for Accurate Graph Representation
Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models
Voting or Consensus? Decision-Making in Multi-Agent Debate
Assistance or Disruption? Exploring and Evaluating the Design and Trade-offs of Proactive AI Programming Support
A Generative Approach to LLM Harmfulness Detection with Special Red Flag Tokens
Score-of-Mixture Training: Training One-Step Generative Models Made Simple via Score Estimation of Mixture Distributions
Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities
Synthetic Datasets for Machine Learning on Spatio-Temporal Graphs using PDEs
Comply: Learning Sentences with Complex Weights inspired by Fruit Fly Olfaction
Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors
Few-Shot Radar Signal Recognition through Self-Supervised Learning and Radio Frequency Domain Adaptation
Transfer Learning Analysis of Variational Quantum Circuits
Plancraft: an evaluation dataset for planning with LLM agents
Fully Data-driven but Interpretable Human Behavioural Modelling with Differentiable Discrete Choice Model
A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation
Is Training Data Quality or Quantity More Impactful to Small Language Model Performance?
Searching Latent Program Spaces
The Pragmatic Frames of Spurious Correlations in Machine Learning: Interpreting How and Why They Matter
ComFairGNN: Community Fair Graph Neural Network
DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving
Online Intrinsic Rewards for Decision Making Agents from Large Language Model Feedback
Large Language Models Engineer Too Many Simple Features For Tabular Data
Overcoming Slow Decision Frequencies in Continuous Control: Model-Based Sequence Reinforcement Learning for Model-Free Control
IdeaSynth: Iterative Research Idea Development Through Evolving and Composing Idea Facets with Literature-Grounded Feedback
SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy
SA-GDA: Spectral Augmentation for Graph Domain Adaptation
The GPT Surprise: Offering Large Language Model Chat in a Massive Coding Class Reduced Engagement but Increased Adopters Exam Performances
State-Constrained Offline Reinforcement Learning
SimAD: A Simple Dissimilarity-based Approach for Time Series Anomaly Detection
Unified ODE Analysis of Smooth Q-Learning Algorithms
FairTargetSim: An Interactive Simulator for Understanding and Explaining the Fairness Effects of Target Variable Definition
Fine-grained Stateful Knowledge Exploration: Effective and Efficient Graph Retrieval with Large Language Models
Learning Safe Numeric Planning Action Models
Augmenting End-to-End Steering Angle Prediction with CAN Bus Data
EASTER: Embedding Aggregation-based Heterogeneous Models Training in Vertical Federated Learning
GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks
Acquiring and Adapting Priors for Novel Tasks via Neural Meta-Architectures
VerifyBench: A Systematic Benchmark for Evaluating Reasoning Verifiers Across Domains
Is Human-Written Data Enough? The Challenge of Teaching Reasoning to LLMs Without RL or Distillation
Working with AI: Measuring the Occupational Implications of Generative AI
Establishing Best Practices for Building Rigorous Agentic Benchmarks
An Agentic Framework for Autonomous Metamaterial Modeling and Inverse Design
Seeking to Collide: Online Safety-Critical Scenario Generation for Autonomous Driving with Retrieval Augmented Large Language Models
BOOST: Bootstrapping Strategy-Driven Reasoning Programs for Program-Guided Fact-Checking
The Odyssey of the Fittest: Can Agents Survive and Still Be Good?
Agentic Reasoning: A Streamlined Framework for Enhancing LLM Reasoning with Agentic Tools
ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning
Seeing Sound, Hearing Sight: Uncovering Modality Bias and Conflict of AI models in Sound Localization
Created by
Haebom
Authors
Yanhao Jia, Ji Xie, S Jivaganesh, Hao Li, Xu Wu, Mengmi Zhang
Overview
This paper compares the sound-localization abilities of humans and AI when audiovisual information conflicts. Humans prioritize auditory cues and localize sounds accurately even when visual information is misleading, whereas state-of-the-art multimodal AI models lean heavily on vision and degrade sharply when visual cues conflict with the audio or are absent. The researchers fine-tuned state-of-the-art models on a stereo audio-image dataset generated via 3D simulation and, despite the limited training data, surpassed existing benchmarks. Notably, like humans, the models showed a bias toward left-right localization, presumably because the stereo audio structure mirrors the placement of human ears. The study highlights how the quality of sensory input and the system architecture shape the accuracy of multimodal representations.
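The left-right bias the summary mentions comes from the binaural cues encoded in stereo audio. As a minimal sketch (not the paper's method), the following toy function classifies a source as left/center/right from the interaural level difference (ILD) of a two-channel signal; the threshold value is an arbitrary assumption for illustration:

```python
import numpy as np

def estimate_side_from_ild(stereo: np.ndarray, threshold_db: float = 1.0) -> str:
    """Classify a source as left/center/right from the interaural
    level difference (ILD) of a stereo signal.

    stereo: array of shape (n_samples, 2), columns = (left, right).
    """
    # Per-channel RMS energy; small epsilon avoids log of zero.
    rms = np.sqrt(np.mean(stereo ** 2, axis=0) + 1e-12)
    # Positive ILD means the left channel is louder.
    ild_db = 20.0 * np.log10(rms[0] / rms[1])
    if ild_db > threshold_db:
        return "left"
    if ild_db < -threshold_db:
        return "right"
    return "center"

# Toy example: a tone attenuated in the right channel, as if the
# source sat to the listener's left.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
stereo = np.stack([tone, 0.5 * tone], axis=1)
print(estimate_side_from_ild(stereo))  # -> left
```

Real systems combine ILD with interaural time differences and spectral cues, but even this single cue shows why two-eared (two-channel) audio chiefly disambiguates azimuth, i.e. left versus right.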
Takeaways and Limitations
•
Takeaways:
◦
By clearly showing how AI differs from human sensory processing, the work motivates the development of more human-like multimodal AI.
◦
Demonstrates that fine-tuning on 3D-simulated data is effective for improving AI sound-localization performance.
◦
Suggests a new research direction for resolving modality bias in AI models.
◦
Deepens our understanding of human sensory-processing mechanisms.
•
Limitations:
◦
Because of the limitations of the dataset used, generalization to real-world environments requires further validation.
◦
The current models focus on localizing particular kinds of sounds; generalization across diverse sound types and environments needs further study.
◦
The work is not grounded in a complete understanding of human sound localization; since it exposes AI's limits through comparison with humans, further study of human cognitive processes is needed.
View PDF