Daily Arxiv
This page collects and summarizes artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please cite the source when sharing.
Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors
Created by
Haebom
Authors
Jingyang Ke, Feiyang Wu, Jiyi Wang, Jeffrey Markowitz, Anqi Wu
Overview
This paper points out that existing decision-making research in neuroscience has focused on simplified behavioral tasks with explicit rewards, capturing only repetitive, stereotyped animal behaviors. In natural settings, animals often exhibit complex behaviors over long timescales, driven by unobservable internal motivations. Time-varying inverse reinforcement learning (IRL) has been used to capture this, but it does not account for the fact that animal decisions depend not only on the current state but also on past history. The paper introduces SWIRL (SWitching IRL), a new framework that incorporates time-varying, history-dependent reward functions. SWIRL models long behavioral sequences as transitions between short-term decision-making processes, each governed by its own reward function, capturing how past decisions and environmental context shape behavior. Applied to simulated experiments and real animal-behavior datasets, SWIRL outperforms history-independent models both quantitatively and qualitatively. It is the first IRL model to integrate history-dependent policies and rewards, advancing our understanding of animals' complex, natural decision-making.
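The core idea — several reward functions taking turns driving behavior, with switches influenced by past actions — can be illustrated with a toy generative model. This is a minimal sketch under illustrative assumptions (discrete states, two hand-picked reward modes, a softmax policy); it is not the authors' implementation of SWIRL.

```python
import numpy as np

# Toy sketch of a switching, history-dependent reward model in the spirit of
# SWIRL. K latent "modes" each define a reward over discrete states; the active
# mode switches over time with probabilities that depend on the previous
# action, so past decisions shape which reward currently drives behavior.

rng = np.random.default_rng(0)
n_states, n_actions, n_modes = 5, 2, 2

# Per-mode reward tables: mode k assigns reward R[k, s] to state s.
R = np.array([[1.0, 0.2, 0.0, 0.0, 0.0],   # mode 0 prefers early states
              [0.0, 0.0, 0.0, 0.2, 1.0]])  # mode 1 prefers late states

# Mode-transition kernel conditioned on the previous action:
# T[a, k, k'] = P(next mode = k' | previous action = a, current mode = k).
T = np.zeros((n_actions, n_modes, n_modes))
T[0] = [[0.95, 0.05], [0.10, 0.90]]  # action 0 tends to keep the current mode
T[1] = [[0.60, 0.40], [0.40, 0.60]]  # action 1 makes a switch more likely

def step_state(s, a):
    """Deterministic toy dynamics: action 0 moves left, action 1 moves right."""
    return max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)

def softmax_policy(s, k, beta=3.0):
    """Boltzmann policy over the reward of the successor state under mode k."""
    q = np.array([R[k, step_state(s, a)] for a in range(n_actions)])
    p = np.exp(beta * (q - q.max()))
    return p / p.sum()

def simulate(horizon=50, s=2, k=0, a_prev=0):
    """Roll out states, actions, and latent modes for `horizon` steps."""
    states, actions, modes = [], [], []
    for _ in range(horizon):
        # 1) the latent mode evolves given the *previous* action
        #    (this is the history dependency)
        k = rng.choice(n_modes, p=T[a_prev, k])
        # 2) the agent acts under the currently active reward function
        a = rng.choice(n_actions, p=softmax_policy(s, k))
        states.append(s); actions.append(a); modes.append(k)
        s, a_prev = step_state(s, a), a
    return states, actions, modes

states, actions, modes = simulate()
print(len(states), sorted(set(modes)))
```

The IRL problem SWIRL addresses is the inverse of this simulation: given only `states` and `actions`, recover the per-mode rewards `R` and the history-dependent switching structure `T`.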
Takeaways, Limitations
• Takeaways:
◦ Presents SWIRL, a new IRL framework incorporating time-varying, history-dependent reward functions, enabling more accurate modeling of animals' complex, natural decision-making.
◦ Makes it possible to analyze how past actions and environmental context influence current decisions.
◦ Overcomes limitations of existing IRL models, opening new possibilities for analyzing naturalistic behavioral data.
◦ Validates the model by applying it to real animal-behavior data.
• Limitations:
◦ Lacks a detailed explanation of the SWIRL model's parameter settings and optimization.
◦ Further research is needed on generalizability to diverse kinds of animal-behavior data.
◦ The model's computational complexity and scalability need further examination.
◦ Precisely defining and measuring internal motivations remains difficult.
View PDF