Daily Arxiv
This page collects artificial-intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please cite the source when sharing.
Cut2Next: Generating Next Shot via In-Context Tuning
DIVER: A Multi-Stage Approach for Reasoning-intensive Information Retrieval
Omni-Effects: Unified and Spatially-Controllable Visual Effects Generation
Chimera: Harnessing Multi-Agent LLMs for Automatic Insider Threat Simulation
Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization
TurboBias: Universal ASR Context-Biasing powered by GPU-accelerated Phrase-Boosting Tree
AMFT: Aligning LLM Reasoners by Meta-Learning the Optimal Imitation-Exploration Balance
LSDT: LLM-Augmented Semantic Digital Twins for Adaptive Knowledge-Intensive Infrastructure Planning
Do Biased Models Have Biased Thoughts?
Early Detection of Pancreatic Cancer Using Multimodal Learning on Electronic Health Record
LLM Unlearning Without an Expert Curated Dataset
Multi-Faceted Large Embedding Tables for Pinterest Ads Ranking
Echo: Decoupling Inference and Training for Large-Scale RL Alignment on Heterogeneous Swarms
Situated Epistemic Infrastructures: A Diagnostic Framework for Post-Coherence Knowledge
RCR-Router: Efficient Role-Aware Context Routing for Multi-Agent LLM Systems with Structured Memory
Position: The Current AI Conference Model is Unsustainable! Diagnosing the Crisis of Centralized AI Conference
GTPO and GRPO-S: Token and Sequence-Level Reward Shaping with Policy Entropy
A Few Words Can Distort Graphs: Knowledge Poisoning Attacks on Graph-based Retrieval-Augmented Generation of Large Language Models
Explaining Time Series Classifiers with PHAR: Rule Extraction and Fusion from Post-hoc Attributions
Role-Aware Language Models for Secure and Contextualized Access Control in Organizations
DynaSwarm: Dynamically Graph Structure Selection for LLM-based Multi-Agent Systems
Post-Completion Learning for Language Models
Alternates, Assemble! Selecting Optimal Alternates for Citizens' Assemblies
Argus Inspection: Do Multimodal Large Language Models Possess the Eye of Panoptes?
RAGtifier: Evaluating RAG Generation Approaches of State-of-the-Art RAG Systems for the SIGIR LiveRAG Competition
Unsupervised Document and Template Clustering using Multimodal Embeddings
Saturation Self-Organizing Map
CulturalFrames: Assessing Cultural Expectation Alignment in Text-to-Image Models and Evaluation Metrics
To Judge or not to Judge: Using LLM Judgements for Advertiser Keyphrase Relevance at eBay
Edge-Cloud Collaborative Computing on Distributed Intelligence and Model Optimization: A Survey
Mj\"olnir: A Deep Learning Parametrization Framework for Global Lightning Flash Density
Federated Learning: A Survey on Privacy-Preserving Collaborative Intelligence
Democracy of AI Numerical Weather Models: An Example of Global Forecasting with FourCastNetv2 Made by a University Research Lab Using GPU
Retrieval-Augmented Generation with Conflicting Evidence
SPIE: Semantic and Structural Post-Training of Image Editing Diffusion Models with AI Feedback
Evaluating Trust in AI, Human, and Co-produced Feedback Among Undergraduate Students
ProtoECGNet: Case-Based Interpretable Deep Learning for Multi-Label ECG Classification with Contrastive Learning
ChatBench: From Static Benchmarks to Human-AI Evaluation
Adaptive Computation Pruning for the Forgetting Transformer
AI-induced sexual harassment: Investigating Contextual Characteristics and User Reactions of Sexual Harassment by a Companion Chatbot
CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation
Opioid Named Entity Recognition (ONER-2025) from Reddit
OSMa-Bench: Evaluating Open Semantic Mapping Under Varying Lighting Conditions
TIDE: Temporal-Aware Sparse Autoencoders for Interpretable Diffusion Transformers in Image Generation
Flexible Prefrontal Control over Hippocampal Episodic Memory for Goal-Directed Generalization
EvoP: Robust LLM Inference via Evolutionary Pruning
Sleepless Nights, Sugary Days: Creating Synthetic Users with Health Conditions for Realistic Coaching Agent Interactions
Zero-shot Emotion Annotation in Facial Images Using Large Multimodal Models: Benchmarking and Prospects for Multi-Class, Multi-Frame Approaches
PAR-AdvGAN: Improving Adversarial Attack Capability with Progressive Auto-Regression AdvGAN
Forget the Data and Fine-Tuning! Just Fold the Network to Compress
FBFL: A Field-Based Coordination Approach for Data Heterogeneity in Federated Learning
Decoding-based Regression
AdEval: Alignment-based Dynamic Evaluation to Mitigate Data Contamination in Large Language Models
Chemist-aligned retrosynthesis by ensembling diverse inductive bias models
Adaptive Informed Deep Neural Networks for Power Flow Analysis
A Risk Taxonomy and Reflection Tool for Large Language Model Adoption in Public Health
Learning Marmoset Vocal Patterns with a Masked Autoencoder for Robust Call Segmentation, Classification, and Caller Identification
Dynamic Spectrum Access for Ambient Backscatter Communication-assisted D2D Systems with Quantum Reinforcement Learning
Zero-Shot Generalization of Vision-Based RL Without Data Augmentation
Hypergraph-based Motion Generation with Multi-modal Interaction Relational Reasoning
3DFacePolicy: Audio-Driven 3D Facial Animation Based on Action Control
Return Prediction for Mean-Variance Portfolio Selection: How Decision-Focused Learning Shapes Forecasting Models
OE3DIS: Open-Ended 3D Point Cloud Instance Segmentation
VisionUnite: A Vision-Language Foundation Model for Ophthalmology Enhanced with Clinical Knowledge
DreamStory: Open-Domain Story Visualization by LLM-Guided Multi-Subject Consistent Diffusion
MEReQ: Max-Ent Residual-Q Inverse RL for Sample-Efficient Alignment from Intervention
Multidimensional Adaptive Coefficient for Inference Trajectory Optimization in Flow and Diffusion
AIOS: LLM Agent Operating System
Keep Your Friends Close: Leveraging Affinity Groups to Accelerate AI Inference Workflows
From Lab to Field: Real-World Evaluation of an AI-Driven Smart Video Solution to Enhance Community Safety
BELLA: Black box model Explanations by Local Linear Approximations
Artificial Intelligence Software Structured to Simulate Human Working Memory, Mental Imagery, and Mental Continuity
Fitting Description Logic Ontologies to ABox and Query Examples
Interpreting Fedspeak with Confidence: A LLM-Based Uncertainty-Aware Framework Guided by Monetary Policy Transmission Paths
Designing a Feedback-Driven Decision Support System for Dynamic Student Intervention
Large Language Models Do Not Simulate Human Psychology
IRL-VLA: Training an Vision-Language-Action Policy via Reward World Model
InfiAlign: A Scalable and Sample-Efficient Framework for Aligning LLMs to Enhance Reasoning Capabilities
SEAgent: Self-Evolving Computer Use Agent with Autonomous Learning from Experience
Trainable Dynamic Mask Sparse Attention
Edge-Based Multimodal Sensor Data Fusion with Vision Language Models (VLMs) for Real-time Autonomous Vehicle Accident Avoidance
Cognitive Kernel-Pro: A Framework for Deep Research Agents and Agent Foundation Models Training
Probabilistic Active Goal Recognition
When Imitation Learning Outperforms Reinforcement Learning in Surgical Action Planning
Effort-aware Fairness: Incorporating a Philosophy-informed, Human-centered Notion of Effort into Algorithmic Fairness Metrics
UnrealZoo: Enriching Photo-realistic Virtual Worlds for Embodied AI
System 2 Reasoning for Human-AI Alignment: Generality and Adaptivity via ARC-AGI
Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models
Training-Free Text-Guided Color Editing with Multi-Modal Diffusion Transformer
Towards Universal Neural Inference
SPARC: Soft Probabilistic Adaptive multi-interest Retrieval Model via Codebooks for recommender system
Dynamic Uncertainty-aware Multimodal Fusion for Outdoor Health Monitoring
Can We Trust AI to Govern AI? Benchmarking LLM Performance on Privacy and AI Governance Exams
Spatial Traces: Enhancing VLA Models with Spatial-Temporal Understanding
E3-Rewrite: Learning to Rewrite SQL for Executability, Equivalence, and Efficiency
When Deepfakes Look Real: Detecting AI-Generated Faces with Unlabeled Data due to Annotation Challenges
Attacks and Defenses Against LLM Fingerprinting
LyS at SemEval 2025 Task 8: Zero-Shot Code Generation for Tabular QA
Retrospective Sparse Attention for Efficient Long-Context Generation
Rational Inverse Reasoning
Chain of Thought Still Thinks Fast: APriCoT Helps with Thinking Slow
Created by
Haebom
Authors
Kyle Moore, Jesse Roberts, Thao Pham, Douglas Fisher
Overview
This paper examines how the biases of language models influence their answer-choice preferences on the Massive Multitask Language Understanding (MMLU) task. The study finds that model bias predicts the model's preferences and mirrors human test-taking strategies even when chain-of-thought (CoT) reasoning is used. To address this, the authors introduce counterfactual prompting with agnostically primed CoT (APriCoT). Counterfactual prompting with CoT alone is not sufficient to mitigate the bias, but APriCoT effectively reduces the influence of base-rate probability and improves overall accuracy. Because CoT tends to reinforce the bias of fast-thinking models under several prompting methodologies, the results suggest that bias mitigation requires a slow-thinking process. APriCoT is a step toward more robust and fair "slow thinking" language models.
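The summary describes APriCoT as counterfactual prompting combined with agnostically primed CoT, in which the model is primed toward each answer option in the same way before reasoning and answering. The paper's exact prompt templates and aggregation procedure are not reproduced here, so the following is only a minimal Python sketch of that general idea: the `apricot_vote` function, the `complete` callable, the prompt wording, and the majority-vote aggregation are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
from typing import Callable, Sequence

def apricot_vote(question: str,
                 options: Sequence[str],
                 complete: Callable[[str], str],
                 samples_per_option: int = 3) -> str:
    """Illustrative sketch of agnostically primed CoT with voting.

    Every answer option is used to prime the model in the same
    ("agnostic") way: the option is posed as a hypothesis, the model is
    asked to reason step by step, and then to commit to a final answer.
    Majority voting over all primed runs dampens the base-rate
    preference a single unprimed prompt would exhibit.
    """
    votes = Counter()
    for option in options:
        for _ in range(samples_per_option):
            prompt = (
                f"Question: {question}\n"
                f"Hypothesis: the answer might be '{option}'.\n"
                "Think step by step about whether this hypothesis holds, "
                "then state your final answer as exactly one of: "
                f"{', '.join(options)}.\nFinal answer:"
            )
            reply = complete(prompt).strip()
            # Count the reply only if it names one of the valid options.
            for candidate in options:
                if candidate.lower() in reply.lower():
                    votes[candidate] += 1
                    break
    return votes.most_common(1)[0][0]

# Usage with a dummy completer; a real LLM call would replace the lambda.
if __name__ == "__main__":
    answer = apricot_vote(
        "Which planet is closest to the Sun?",
        ["Venus", "Mercury", "Earth", "Mars"],
        complete=lambda prompt: "Mercury",
    )
    print(answer)  # -> Mercury
```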
Takeaways and Limitations
• Takeaways:
◦ Shows that language-model bias strongly influences answer choice on tasks such as MMLU.
◦ Suggests that CoT alone cannot fully resolve model bias and that a "slow thinking" process is needed.
◦ Demonstrates that APriCoT reduces bias more effectively than existing methods and improves accuracy.
• Limitations:
◦ Further research is needed on whether APriCoT's effectiveness generalizes to all types of bias and to all language models.
◦ Further analysis of APriCoT's computational cost and efficiency is needed.
◦ A clear criterion for defining and measuring "slow thinking" is lacking.
View the PDF