Daily Arxiv
A page that collects artificial-intelligence papers published around the world.
This page is summarized using Google Gemini and operated on a non-profit basis.
Copyright of the papers belongs to the authors and their institutions; please cite the source when sharing.
Dehazing Light Microscopy Images with Guided Conditional Flow Matching: finding a sweet spot between fidelity and realism
EFRame: Deeper Reasoning via Exploration-Filter-Replay Reinforcement Learning Framework
Refine-POI: Reinforcement Fine-Tuned Large Language Models for Next Point-of-Interest Recommendation
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation
Potemkin Understanding in Large Language Models
OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs
How to Retrieve Examples in In-context Learning to Improve Conversational Emotion Recognition using Large Language Models?
Position: Machine Learning Conferences Should Establish a "Refutations and Critiques" Track
Arabic Dialect Classification using RNNs, Transformers, and Large Language Models: A Comparative Analysis
Improving Student-AI Interaction Through Pedagogical Prompting: An Example in Computer Science Education
GLIMPSE: Gradient-Layer Importance Mapping for Prompted Visual Saliency Explanation for Generative LVLMs
Automatic Depression Assessment using Machine Learning: A Comprehensive Survey
Generalizing vision-language models to novel domains: A comprehensive survey
Comparative Evaluation of ChatGPT and DeepSeek Across Key NLP Tasks: Strengths, Weaknesses, and Domain-Specific Performance
AI-Generated Song Detection via Lyrics Transcripts
KAG-Thinker: Interactive Thinking and Deep Reasoning in LLMs via Knowledge-Augmented Generation
Data Quality Issues in Multilingual Speech Datasets: The Need for Sociolinguistic Awareness and Proactive Language Planning
Double Entendre: Robust Audio-Based AI-Generated Lyrics Detection via Multi-View Fusion
Aligning Evaluation with Clinical Priorities: Calibration, Label Shift, and Error Costs
Value-Free Policy Optimization via Reward Partitioning
VFEFL: Privacy-Preserving Federated Learning against Malicious Clients via Verifiable Functional Encryption
Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders
Robust LLM Unlearning with MUDMAN: Meta-Unlearning with Disruption Masking And Normalization
CMI-Bench: A Comprehensive Benchmark for Evaluating Music Instruction Following
StepProof: Step-by-step verification of natural language mathematical proofs
Scalable Non-Equivariant 3D Molecule Generation via Rotational Alignment
Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting
SLED: A Speculative LLM Decoding Framework for Efficient Edge Serving
FZOO: Fast Zeroth-Order Optimizer for Fine-Tuning Large Language Models towards Adam-Scale Speed
VeriLoC: Line-of-Code Level Prediction of Hardware Design Quality from Verilog Code
Multi Layered Autonomy and AI Ecologies in Robotic Art Installations
Bridging Subjective and Objective QoE: Operator-Level Aggregation Using LLM-Based Comment Analysis and Network MOS Comparison
Quantum computing and artificial intelligence: status and perspectives
Fine-Tuning Next-Scale Visual Autoregressive Models with Group Relative Policy Optimization
A Large Language Model-Enabled Control Architecture for Dynamic Resource Capability Exploration in Multi-Agent Manufacturing Systems
Spotlight-TTS: Spotlighting the Style via Voiced-Aware Style Extraction and Style Direction Adjustment for Expressive Text-to-Speech
WeatherEdit: Controllable Weather Editing with 4D Gaussian Field
From Alignment to Advancement: Bootstrapping Audio-Language Alignment with Synthetic Data
Error Optimization: Overcoming Exponential Signal Decay in Deep Predictive Coding Networks
TinyAlign: Boosting Lightweight Vision-Language Models by Mitigating Modal Alignment Bottlenecks
Super-Resolution Generative Adversarial Networks based Video Enhancement
Object detection in adverse weather conditions for autonomous vehicles using Instruct Pix2Pix
INSIGHT: Bridging the Student-Teacher Gap in Times of Large Language Models
SConU: Selective Conformal Uncertainty in Large Language Models
MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation
Sculpting Memory: Multi-Concept Forgetting in Diffusion Models via Dynamic Mask and Concept-Aware Optimization
Achieving binary weight and activation for LLMs using Post-Training Quantization
A Consequentialist Critique of Binary Classification Evaluation Practices
Redefining Evaluation Standards: A Unified Framework for Evaluating the Korean Capabilities of Language Models
Test-Time Reasoning Through Visual Human Preferences with VLMs and Soft Rewards
FedMM-X: A Trustworthy and Interpretable Framework for Federated Multi-Modal Learning in Dynamic Environments
Automating Adjudication of Cardiovascular Events Using Large Language Models
ATTENTION2D: Communication Efficient Distributed Self-Attention Mechanism
Visual Position Prompt for MLLM based Visual Grounding
Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding
Privacy Ethics Alignment in AI: A Stakeholder-Centric Framework for Ethical AI
Characterizing GPU Resilience and Impact on AI/HPC Systems
Explainable Sentiment Analysis with DeepSeek-R1: Performance, Efficiency, and Few-Shot Learning
Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction
The Problem of the Priors, or Posteriors?
Gumiho: A Hybrid Architecture to Prioritize Early Tokens in Speculative Decoding
Disrupting Model Merging: A Parameter-Level Defense Without Sacrificing Accuracy
What can large language models do for sustainable food?
Enough Coin Flips Can Make LLMs Act Bayesian
How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects
Time-MQA: Time Series Multi-Task Question Answering with Context Enhancement
PipeOffload: Improving Scalability of Pipeline Parallelism with Memory Optimization
Space-Time Graphs of Convex Sets for Multi-Robot Motion Planning
HalCECE: A Framework for Explainable Hallucination Detection through Conceptual Counterfactuals in Image Captioning
LNUCB-TA: Linear-nonlinear Hybrid Bandit Learning with Temporal Attention
No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data
Investigating the Impact of Quantization Methods on the Safety and Reliability of Large Language Models
Retrieval Augmented Generation Based LLM Evaluation For Protocol State Machine Inference With Chain-of-Thought Reasoning
A general language model for peptide identification
Cluster and Predict Latent Patches for Improved Masked Image Modeling
Semantic-Aware Adaptive Video Streaming Using Latent Diffusion Models for Wireless Networks
KMI: A Dataset of Korean Motivational Interviewing Dialogues for Psychotherapy
Mechanistic Interpretability of Emotion Inference in Large Language Models
Multimodal Medical Code Tokenizer
Time to Rethink AI for Combinatorial Optimization: Classical Algorithms Remain Tough to Match
Simultaneous Multi-Robot Motion Planning with Projected Diffusion Models
Environment-Driven Online LiDAR-Camera Extrinsic Calibration
Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented Generation
DReSS: Data-driven Regularized Structured Streamlining for Large Language Models
Towards Automated Self-Supervised Learning for Truly Unsupervised Graph Anomaly Detection
Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning of Language Models
DisCoPatch: Taming Adversarially-driven Batch Statistics for Improved Out-of-Distribution Detection
An Investigation into Seasonal Variations in Energy Forecasting for Student Residences
Efficiently Serving Large Multimodal Models Using EPD Disaggregation
PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models
AlignGuard: Scalable Safety Alignment for Text-to-Image Generation
A Library for Learning Neural Operators
ZipAR: Parallel Auto-regressive Image Generation through Spatial Locality
Pretrained Reversible Generation as Unsupervised Visual Representation Learning
FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait
SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?
Recommender Systems for Good (RS4Good): Survey of Use Cases and a Call to Action for Research that Matters
Foundation Models for Wearable Movement Data in Mental Health Research
GenBFA: An Evolutionary Optimization Approach to Bit-Flip Attacks on LLMs
Enhancing Diffusion Posterior Sampling for Inverse Problems by Integrating Crafted Measurements
Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation
Created by Haebom
Authors
Qitao Qin, Yucong Luo, Yihang Lu, Zhibo Chu, Xianwei Meng
Overview
Retrieval-Augmented Generation (RAG) has emerged as a promising way to improve response accuracy and reduce factual errors and hallucinations by incorporating non-parametric knowledge from external knowledge bases into the model. However, conventional RAG methods perform independent retrieval operations and integrate the retrieved information directly into generation, without maintaining a summary memory or using an adaptive retrieval strategy; the resulting noise from redundant information and lack of information integration cause difficulties in open-domain QA tasks. To address these problems, this paper proposes Amber (Adaptive memory-based optimization for enhanced RAG) for open-domain QA. Amber consists of an agent-based memory updater, an adaptive information collector, and a multi-granular content filter, which work together within an iterative memory-update paradigm. A multi-agent collaboration approach consolidates and optimizes the language model's memory, ensuring comprehensive knowledge integration across previous retrieval steps. Based on the accumulated knowledge, Amber dynamically adjusts its retrieval queries and decides when to stop retrieving, improving retrieval efficiency and effectiveness. It also filters irrelevant content at multiple granularities, reducing noise while retaining the necessary information and improving overall model performance. Extensive experiments were conducted on multiple open-domain QA datasets.
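The iterative retrieve-filter-update loop described above can be sketched in miniature. Everything below (the keyword-overlap retriever, the two-token filter threshold, the "stop when nothing new arrives" rule, and the query-reformulation step) is an illustrative assumption for exposition, not the authors' implementation:

```python
def retrieve(query, corpus):
    """Stand-in for the adaptive information collector:
    return documents sharing at least one query token."""
    toks = query.lower().split()
    return [d for d in corpus if any(t in d.lower() for t in toks)]

def filter_content(docs, question):
    """Stand-in for the multi-granular content filter:
    keep documents matching at least two question tokens (a crude
    coarse-grained relevance proxy)."""
    toks = question.lower().split()
    return [d for d in docs if sum(t in d.lower() for t in toks) >= 2]

def update_memory(memory, docs):
    """Stand-in for the agent-based memory updater:
    merge new evidence into memory, skipping duplicates."""
    for d in docs:
        if d not in memory:
            memory.append(d)
    return memory

def amber_loop(question, corpus, max_steps=3):
    """Toy iterative memory-update loop: retrieve, filter, update,
    then reformulate the query from accumulated knowledge."""
    memory, query = [], question
    for _ in range(max_steps):
        docs = filter_content(retrieve(query, corpus), question)
        new = [d for d in docs if d not in memory]
        if not new:  # adaptive stop: retrieval no longer adds knowledge
            break
        memory = update_memory(memory, new)
        # Query reformulation (toy version: append a salient word
        # from the newest piece of evidence).
        query = question + " " + new[-1].split()[0]
    return memory
```

A usage sketch: `amber_loop("capital of France", corpus)` accumulates only the documents that pass the filter and halts as soon as a retrieval round contributes nothing new, mirroring the summary's "decide when to stop retrieving" behavior.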
Takeaways and Limitations
• Takeaways:
◦ Presents a new method (Amber) that effectively addresses the limitations of existing RAG approaches in open-domain QA, namely redundant information and poor information integration.
◦ Improves retrieval efficiency and accuracy through an agent-based memory updater, an adaptive information collector, and a multi-granular content filter.
◦ Enables comprehensive knowledge integration via a multi-agent collaboration approach.
◦ Demonstrates strong performance across a variety of open-domain QA datasets.
• Limitations:
◦ Further study is needed on the method's generalization and its applicability to diverse domains.
◦ Detailed analysis of memory management and inter-agent interaction may be lacking.
◦ Performance may be affected by the characteristics of the datasets used in the experiments.
◦ Additional analysis of computational cost and memory usage is needed.
View PDF