Daily Arxiv
This page compiles artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.
Dehazing Light Microscopy Images with Guided Conditional Flow Matching: finding a sweet spot between fidelity and realism
EFRame: Deeper Reasoning via Exploration-Filter-Replay Reinforcement Learning Framework
Refine-POI: Reinforcement Fine-Tuned Large Language Models for Next Point-of-Interest Recommendation
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation
Potemkin Understanding in Large Language Models
OmniEval: A Benchmark for Evaluating Omni-modal Models with Visual, Auditory, and Textual Inputs
How to Retrieve Examples in In-context Learning to Improve Conversational Emotion Recognition using Large Language Models?
Position: Machine Learning Conferences Should Establish a "Refutations and Critiques" Track
Arabic Dialect Classification using RNNs, Transformers, and Large Language Models: A Comparative Analysis
Improving Student-AI Interaction Through Pedagogical Prompting: An Example in Computer Science Education
GLIMPSE: Gradient-Layer Importance Mapping for Prompted Visual Saliency Explanation for Generative LVLMs
Automatic Depression Assessment using Machine Learning: A Comprehensive Survey
Generalizing vision-language models to novel domains: A comprehensive survey
Comparative Evaluation of ChatGPT and DeepSeek Across Key NLP Tasks: Strengths, Weaknesses, and Domain-Specific Performance
AI-Generated Song Detection via Lyrics Transcripts
KAG-Thinker: Interactive Thinking and Deep Reasoning in LLMs via Knowledge-Augmented Generation
Data Quality Issues in Multilingual Speech Datasets: The Need for Sociolinguistic Awareness and Proactive Language Planning
Double Entendre: Robust Audio-Based AI-Generated Lyrics Detection via Multi-View Fusion
Aligning Evaluation with Clinical Priorities: Calibration, Label Shift, and Error Costs
Value-Free Policy Optimization via Reward Partitioning
VFEFL: Privacy-Preserving Federated Learning against Malicious Clients via Verifiable Functional Encryption
Enabling Precise Topic Alignment in Large Language Models Via Sparse Autoencoders
Robust LLM Unlearning with MUDMAN: Meta-Unlearning with Disruption Masking And Normalization
CMI-Bench: A Comprehensive Benchmark for Evaluating Music Instruction Following
StepProof: Step-by-step verification of natural language mathematical proofs
Scalable Non-Equivariant 3D Molecule Generation via Rotational Alignment
Improved Supervised Fine-Tuning for Large Language Models to Mitigate Catastrophic Forgetting
SLED: A Speculative LLM Decoding Framework for Efficient Edge Serving
FZOO: Fast Zeroth-Order Optimizer for Fine-Tuning Large Language Models towards Adam-Scale Speed
VeriLoC: Line-of-Code Level Prediction of Hardware Design Quality from Verilog Code
Multi Layered Autonomy and AI Ecologies in Robotic Art Installations
Bridging Subjective and Objective QoE: Operator-Level Aggregation Using LLM-Based Comment Analysis and Network MOS Comparison
Quantum computing and artificial intelligence: status and perspectives
Fine-Tuning Next-Scale Visual Autoregressive Models with Group Relative Policy Optimization
A Large Language Model-Enabled Control Architecture for Dynamic Resource Capability Exploration in Multi-Agent Manufacturing Systems
Spotlight-TTS: Spotlighting the Style via Voiced-Aware Style Extraction and Style Direction Adjustment for Expressive Text-to-Speech
WeatherEdit: Controllable Weather Editing with 4D Gaussian Field
From Alignment to Advancement: Bootstrapping Audio-Language Alignment with Synthetic Data
Error Optimization: Overcoming Exponential Signal Decay in Deep Predictive Coding Networks
TinyAlign: Boosting Lightweight Vision-Language Models by Mitigating Modal Alignment Bottlenecks
Super-Resolution Generative Adversarial Networks based Video Enhancement
Object detection in adverse weather conditions for autonomous vehicles using Instruct Pix2Pix
INSIGHT: Bridging the Student-Teacher Gap in Times of Large Language Models
SConU: Selective Conformal Uncertainty in Large Language Models
MetaSynth: Meta-Prompting-Driven Agentic Scaffolds for Diverse Synthetic Data Generation
Sculpting Memory: Multi-Concept Forgetting in Diffusion Models via Dynamic Mask and Concept-Aware Optimization
Achieving binary weight and activation for LLMs using Post-Training Quantization
A Consequentialist Critique of Binary Classification Evaluation Practices
Redefining Evaluation Standards: A Unified Framework for Evaluating the Korean Capabilities of Language Models
Test-Time Reasoning Through Visual Human Preferences with VLMs and Soft Rewards
FedMM-X: A Trustworthy and Interpretable Framework for Federated Multi-Modal Learning in Dynamic Environments
Automating Adjudication of Cardiovascular Events Using Large Language Models
ATTENTION2D: Communication Efficient Distributed Self-Attention Mechanism
Visual Position Prompt for MLLM-based Visual Grounding
Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding
Privacy Ethics Alignment in AI: A Stakeholder-Centric Framework for Ethical AI
Characterizing GPU Resilience and Impact on AI/HPC Systems
Explainable Sentiment Analysis with DeepSeek-R1: Performance, Efficiency, and Few-Shot Learning
Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction
The Problem of the Priors, or Posteriors?
Gumiho: A Hybrid Architecture to Prioritize Early Tokens in Speculative Decoding
Disrupting Model Merging: A Parameter-Level Defense Without Sacrificing Accuracy
What can large language models do for sustainable food?
Enough Coin Flips Can Make LLMs Act Bayesian
How to Move Your Dragon: Text-to-Motion Synthesis for Large-Vocabulary Objects
Time-MQA: Time Series Multi-Task Question Answering with Context Enhancement
PipeOffload: Improving Scalability of Pipeline Parallelism with Memory Optimization
Space-Time Graphs of Convex Sets for Multi-Robot Motion Planning
HalCECE: A Framework for Explainable Hallucination Detection through Conceptual Counterfactuals in Image Captioning
LNUCB-TA: Linear-nonlinear Hybrid Bandit Learning with Temporal Attention
No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data
Investigating the Impact of Quantization Methods on the Safety and Reliability of Large Language Models
Retrieval Augmented Generation Based LLM Evaluation For Protocol State Machine Inference With Chain-of-Thought Reasoning
A general language model for peptide identification
Cluster and Predict Latent Patches for Improved Masked Image Modeling
Semantic-Aware Adaptive Video Streaming Using Latent Diffusion Models for Wireless Networks
KMI: A Dataset of Korean Motivational Interviewing Dialogues for Psychotherapy
Mechanistic Interpretability of Emotion Inference in Large Language Models
Multimodal Medical Code Tokenizer
Time to Rethink AI for Combinatorial Optimization: Classical Algorithms Remain Tough to Match
Simultaneous Multi-Robot Motion Planning with Projected Diffusion Models
Environment-Driven Online LiDAR-Camera Extrinsic Calibration
Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented Generation
DReSS: Data-driven Regularized Structured Streamlining for Large Language Models
Towards Automated Self-Supervised Learning for Truly Unsupervised Graph Anomaly Detection
Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning of Language Models
DisCoPatch: Taming Adversarially-driven Batch Statistics for Improved Out-of-Distribution Detection
An Investigation into Seasonal Variations in Energy Forecasting for Student Residences
Efficiently Serving Large Multimodal Models Using EPD Disaggregation
PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models
AlignGuard: Scalable Safety Alignment for Text-to-Image Generation
A Library for Learning Neural Operators
ZipAR: Parallel Auto-regressive Image Generation through Spatial Locality
Pretrained Reversible Generation as Unsupervised Visual Representation Learning
FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait
SEUF: Is Unlearning One Expert Enough for Mixture-of-Experts LLMs?
Recommender Systems for Good (RS4Good): Survey of Use Cases and a Call to Action for Research that Matters
Foundation Models for Wearable Movement Data in Mental Health Research
GenBFA: An Evolutionary Optimization Approach to Bit-Flip Attacks on LLMs
Enhancing Diffusion Posterior Sampling for Inverse Problems by Integrating Crafted Measurements
$C^3$-Bench: The Things Real Disturbing LLM based Agent in Multi-Tasking
Created by Haebom
Authors
Peijie Yu, Yifan Yang, Jinjian Li, Zelong Zhang, Haorui Wang, Xiao Feng, Feng Zhang
Overview
This paper is set against the backdrop of large language model-based agents, which have transformed how we interact with the physical world by using tools to modify their environment. Unlike traditional NLP systems, these agents must make decisions while accounting for more complex factors such as inter-tool relationships, environment feedback, and prior decisions. Prior work has mostly evaluated agents through multi-turn dialogue, overlooking how these key factors affect agent behavior. To close this gap, the paper presents $C^3$-Bench, an open-source, high-quality benchmark. $C^3$-Bench integrates the notion of attacks and applies single-factor (univariate) analysis to precisely identify the factors that affect agent robustness. Concretely, it designs three challenges — navigating complex tool relationships, handling critical hidden information, and managing dynamic decision paths — and supports them with fine-grained metrics, a novel data-collection algorithm, and a reproducible evaluation methodology. Extensive experiments on 49 prominent agents (covering general fast-thinking, slow-thinking, and domain-specific models) show that agents have substantial weaknesses in handling tool dependencies, long-context information, and frequent policy-type transitions. Fundamentally, $C^3$-Bench aims to expose model vulnerabilities through these challenges and to advance research on the interpretability of agent performance. The benchmark is publicly available at
https://github.com/TencentHunyuan/C3-Benchmark
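The single-factor (univariate) analysis idea above can be sketched in a few lines: vary one difficulty factor at a time while holding the others at a baseline, and watch how success rate moves. This is a toy illustration only — the simulated agent, the factor names, and the decay rates are hypothetical stand-ins, not $C^3$-Bench's actual tasks or metrics.

```python
# Toy sketch of univariate analysis for agent robustness:
# sweep one factor at a time, keep the rest at a baseline,
# and estimate success rate with a simulated agent.

import random

BASELINE = {"tool_chain_depth": 1, "context_length": 1, "policy_switches": 0}

def simulated_agent_success(factors, rng):
    # Hypothetical agent: success probability decays as each
    # difficulty factor grows (a stand-in for a real LLM agent).
    p = 0.9
    p *= 0.8 ** (factors["tool_chain_depth"] - 1)
    p *= 0.9 ** (factors["context_length"] - 1)
    p *= 0.7 ** factors["policy_switches"]
    return rng.random() < p

def univariate_sweep(factor, values, trials=2000, seed=0):
    """Estimate success rate as `factor` varies, all else at baseline."""
    rng = random.Random(seed)
    rates = {}
    for v in values:
        factors = dict(BASELINE, **{factor: v})
        wins = sum(simulated_agent_success(factors, rng) for _ in range(trials))
        rates[v] = wins / trials
    return rates

if __name__ == "__main__":
    for factor in BASELINE:
        values = [0, 1, 2, 3] if factor == "policy_switches" else [1, 2, 3, 4]
        rates = univariate_sweep(factor, values)
        print(factor, {v: round(r, 2) for v, r in rates.items()})
```

Because only one factor moves per sweep, any drop in success rate can be attributed to that factor alone — which is what makes this kind of analysis useful for pinpointing robustness weaknesses.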
Takeaways, Limitations
• Takeaways:
◦ Provides a new benchmark ($C^3$-Bench) for evaluating the robustness and interpretability of large language model-based agents.
◦ Points to future research directions by exposing agent vulnerabilities in tool dependency, long-context information handling, and policy-transition ability.
◦ Released as open source, enabling reproducibility and follow-up work by other researchers.
◦ Fine-grained metrics and a novel data-collection algorithm enable more refined agent evaluation.
• Limitations:
◦ The types and scope of tasks currently included in the benchmark may be limited.
◦ The analysis is univariate; deeper insight would require multivariate analysis.
◦ The set of evaluated agents may be skewed toward particular domains.
◦ The benchmark may not fully reflect the varied situations and variables that arise in real-world deployment.
View PDF