Daily Arxiv
A page that collects artificial-intelligence papers published around the world.
This page summarizes papers using Google Gemini and is operated on a non-profit basis.
Copyright of the papers belongs to the authors and their institutions; please cite the source when sharing.
SystolicAttention: Fusing FlashAttention within a Single Systolic Array
Automated Novelty Evaluation of Academic Paper: A Collaborative Approach Integrating Human and Large Language Model Knowledge
"Is it always watching? Is it always listening?" Exploring Contextual Privacy and Security Concerns Toward Domestic Social Robots
A Group Theoretic Analysis of the Symmetries Underlying Base Addition and Their Learnability by Neural Networks
GHPO: Adaptive Guidance for Stable and Efficient LLM Reinforcement Learning
Extension OL-MDISF: Online Learning from Mix-Typed, Drifted, and Incomplete Streaming Features
When and Where do Data Poisons Attack Textual Inversion?
Truth Sleuth and Trend Bender: AI Agents to fact-check YouTube videos and influence opinions
NLP Meets the World: Toward Improving Conversations With the Public About Natural Language Processing Research
Accurate generation of chemical reaction transition states by conditional flow matching
Benchmarking and Evaluation of AI Models in Biology: Outcomes and Recommendations from the CZI Virtual Cells Workshop
A PBN-RL-XAI Framework for Discovering a "Hit-and-Run" Therapeutic Strategy in Melanoma
NeuTSFlow: Modeling Continuous Functions Behind Time Series Forecasting
THOR: Transformer Heuristics for On-Demand Retrieval
Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs
Bridging Literature and the Universe Via A Multi-Agent Large Language Model System
Magneto-radiative modelling and artificial neural network optimization of biofluid flow in a stenosed arterial domain
Symbiosis: Multi-Adapter Inference and Fine-Tuning
Rethinking Data Protection in the (Generative) Artificial Intelligence Era
SoK: Semantic Privacy in Large Language Models
FedRef: Communication-Efficient Bayesian Fine Tuning with Reference Model
Predictable Scale: Part II, Farseer: A Refined Scaling Law in Large Language Models
Position Prediction Self-Supervised Learning for Multimodal Satellite Imagery Semantic Segmentation
ScaleRTL: Scaling LLMs with Reasoning Data and Test-Time Compute for Accurate RTL Code Generation
HueManity: Probing Fine-Grained Visual Perception in MLLMs
AKReF: An argumentative knowledge representation framework for structured argumentation
Large Language Models Often Know When They Are Being Evaluated
Dynamic Risk Assessments for Offensive Cybersecurity Agents
How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference
Diffused Responsibility: Analyzing the Energy Consumption of Generative Text-to-Audio Diffusion Models
Flow-GRPO: Training Flow Matching Models via Online RL
On the Need for a Statistical Foundation in Scenario-Based Testing of Autonomous Vehicles
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift
TD-EVAL: Revisiting Task-Oriented Dialogue Evaluation by Combining Turn-Level Precision with Dialogue-Level Comparisons
MobileCity: An Efficient Framework for Large-Scale Urban Behavior Simulation
Semantic Adapter for Universal Text Embeddings: Diagnosing and Mitigating Negation Blindness to Enhance Universality
Leveraging LLMs for User Stories in AI Systems: UStAI Dataset
Large Language Models are Unreliable for Cyber Threat Intelligence
AnnoPage Dataset: Dataset of Non-Textual Elements in Documents with Fine-Grained Categorization
A Thorough Assessment of the Non-IID Data Impact in Federated Learning
Visual Position Prompt for MLLM based Visual Grounding
Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction
FADE: Why Bad Descriptions Happen to Good Features
FlipConcept: Tuning-Free Multi-Concept Personalization for Text-to-Image Generation
LUMINA-Net: Low-light Upgrade through Multi-stage Illumination and Noise Adaptation Network for Image Enhancement
Towards Geo-Culturally Grounded LLM Generations
Learning to Reason at the Frontier of Learnability
Flexible and Efficient Grammar-Constrained Decoding
PATCH: a deep learning method to assess heterogeneity of artistic practice in historical paintings
The Impact of Modern AI in Metadata Management
Learning an Effective Premise Retrieval Model for Efficient Mathematical Formalization
ChipAlign: Instruction Alignment in Large Language Models for Chip Design via Geodesic Interpolation
Many Objective Problems Where Crossover is Provably Essential
Patherea: Cell Detection and Classification for the 2020s
ViTally Consistent: Scaling Biological Representation Learning for Cell Microscopy
TextDestroyer: A Training- and Annotation-Free Diffusion Method for Destroying Anomal Text from Images
Quantifying calibration error in modern neural networks through evidence based theory
Multi-view biomedical foundation models for molecule-target and property prediction
Reinforced Imitative Trajectory Planning for Urban Automated Driving
Distilling Invariant Representations with Dual Augmentation
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects
Linearly-Interpretable Concept Embedding Models for Text Analysis
Towards Understanding Link Predictor Generalizability Under Distribution Shifts
StreakNet-Arch: An Anti-scattering Network-based Architecture for Underwater Carrier LiDAR-Radar Imaging
Enhancing Trust in Autonomous Agents: An Architecture for Accountability and Explainability through Blockchain and Large Language Models
On the Statistical Properties of Generative Adversarial Models for Low Intrinsic Data Dimension
Programming Distributed Collective Processes in the eXchange Calculus
Holistic analysis on the sustainability of Federated Learning across AI product lifecycle
Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory
Epic-Sounds: A Large-scale Dataset of Actions That Sound
From Semantic Web and MAS to Agentic AI: A Unified Narrative of the Web of Agents
On Gradual Semantics for Assumption-Based Argumentation
The Challenge of Teaching Reasoning to LLMs Without RL or Distillation
Continuous Classification Aggregation
Can Prompt Difficulty be Online Predicted for Accelerating RL Finetuning of Reasoning Models?
MacOSWorld: A Multilingual Interactive Benchmark for GUI Agents
GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning
Lost in Transmission: When and Why LLMs Fail to Reason Globally
A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems
System 0/1/2/3: Quad-process theory for multi-timescale embodied collective cognitive systems
Practical Principles for AI Cost and Compute Accounting
Generative Emergent Communication: Large Language Model is a Collective World Model
Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty
Learning Lifted STRIPS Models from Action Traces Alone: A Simple, General, and Scalable Solution
Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training
Life, uh, Finds a Way: Hyperadaptability by Behavioral Search
Governance of Generative Artificial Intelligence for Companies
RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality
Artificial Intelligence Governance for Businesses
Interpreting Radiologist's Intention from Eye Movements in Chest X-ray Diagnosis
S2WTM: Spherical Sliced-Wasserstein Autoencoder for Topic Modeling
LLM-Based Config Synthesis requires Disambiguation
Characterizing State Space Model (SSM) and SSM-Transformer Hybrid Language Model Performance with Long Context Length
EgoVLA: Learning Vision-Language-Action Models from Egocentric Human Videos
Can We Predict Alignment Before Models Finish Thinking? Towards Monitoring Misaligned Reasoning Models
Unit-Based Histopathology Tissue Segmentation via Multi-Level Feature Representation
Advancing Retrieval-Augmented Generation for Structured Enterprise and Internal Data
Mixture of Raytraced Experts
QuRe: Query-Relevant Retrieval through Hard Negative Sampling in Composed Image Retrieval
AutoVDC: Automated Vision Data Cleaning Using Vision-Language Models
Adaptive Elicitation of Latent Information Using Natural Language
Created by Haebom
Authors
Jimmy Wang, Thomas Zollo, Richard Zemel, Hongseok Namkoong
Overview
This paper presents an adaptive questioning strategy that gathers information to reduce uncertainty about latent entities. It quantifies uncertainty via a meta-trained language model that simulates future observations, leveraging the generalization ability and world knowledge of large language models (LLMs). Using autoregressive forward simulation, the method measures how much each candidate question would reduce epistemic uncertainty and develops a refined information-gathering strategy that selects the most informative next question. Experiments on the 20 Questions game, dynamic opinion polling, and adaptive student assessment show that the approach outperforms existing methods.
Takeaways, Limitations
• Takeaways:
◦ Empirically demonstrates the effectiveness of adaptive information-gathering strategies that leverage LLMs.
◦ Presents a method for effectively quantifying uncertainty in complex natural-language settings via a meta-trained language model.
◦ Demonstrates the potential of optimizing information-gathering strategies through autoregressive forward simulation.
◦ Suggests applicability to diverse domains, such as student assessment, disease diagnosis, and user-preference learning.
• Limitations:
◦ Lacks a detailed description of the training data and architecture of the meta-trained language model.
◦ Generalizability requires further validation, given the specificity of the experimental settings.
◦ The question-generation process is complex and costly, and its efficiency needs improvement.
◦ No clear solution is offered for the difficulty of probabilistically modeling abstract latent entities.
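The selection loop described above (simulate answers, measure the expected drop in epistemic uncertainty, ask the most informative question) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy belief distribution over hypotheses and a hand-written `answer_model` stand in for the meta-trained LLM's autoregressive forward simulation, and all names here are hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution given as a dict."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_info_gain(belief, answer_model, question):
    """Expected reduction in epistemic uncertainty from asking `question`.

    belief: dict mapping latent hypothesis -> probability.
    answer_model(question, hypothesis) -> dict of answer -> probability;
    here it plays the role of the LLM's forward simulation of observations.
    """
    h_prior = entropy(belief)
    # Marginal probability of each simulated answer under the current belief.
    answer_probs = {}
    for hyp, p_hyp in belief.items():
        for ans, p_ans in answer_model(question, hyp).items():
            answer_probs[ans] = answer_probs.get(ans, 0.0) + p_hyp * p_ans
    # Expected posterior entropy after observing the simulated answer.
    h_post = 0.0
    for ans, p_ans in answer_probs.items():
        if p_ans == 0:
            continue
        posterior = {
            hyp: belief[hyp] * answer_model(question, hyp).get(ans, 0.0) / p_ans
            for hyp in belief
        }
        h_post += p_ans * entropy(posterior)
    return h_prior - h_post

def select_question(belief, answer_model, questions):
    """Pick the candidate question with the highest expected information gain."""
    return max(questions, key=lambda q: expected_info_gain(belief, answer_model, q))
```

In a 20 Questions setting, a question that splits the remaining hypotheses evenly ("does it bark?" over {cat, dog}) yields one full bit of expected gain, while a question all hypotheses answer identically yields zero, so `select_question` prefers the former.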
View PDF