Daily Arxiv
This page collects and summarizes artificial-intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of the papers belongs to the authors and their institutions; please cite the source when sharing.
SystolicAttention: Fusing FlashAttention within a Single Systolic Array
Automated Novelty Evaluation of Academic Paper: A Collaborative Approach Integrating Human and Large Language Model Knowledge
"Is it always watching? Is it always listening?" Exploring Contextual Privacy and Security Concerns Toward Domestic Social Robots
A Group Theoretic Analysis of the Symmetries Underlying Base Addition and Their Learnability by Neural Networks
GHPO: Adaptive Guidance for Stable and Efficient LLM Reinforcement Learning
Extension OL-MDISF: Online Learning from Mix-Typed, Drifted, and Incomplete Streaming Features
When and Where do Data Poisons Attack Textual Inversion?
Truth Sleuth and Trend Bender: AI Agents to fact-check YouTube videos and influence opinions
NLP Meets the World: Toward Improving Conversations With the Public About Natural Language Processing Research
Accurate generation of chemical reaction transition states by conditional flow matching
Benchmarking and Evaluation of AI Models in Biology: Outcomes and Recommendations from the CZI Virtual Cells Workshop
A PBN-RL-XAI Framework for Discovering a "Hit-and-Run" Therapeutic Strategy in Melanoma
NeuTSFlow: Modeling Continuous Functions Behind Time Series Forecasting
THOR: Transformer Heuristics for On-Demand Retrieval
Towards Agentic RAG with Deep Reasoning: A Survey of RAG-Reasoning Systems in LLMs
Bridging Literature and the Universe Via A Multi-Agent Large Language Model System
Magneto-radiative modelling and artificial neural network optimization of biofluid flow in a stenosed arterial domain
Symbiosis: Multi-Adapter Inference and Fine-Tuning
Rethinking Data Protection in the (Generative) Artificial Intelligence Era
SoK: Semantic Privacy in Large Language Models
FedRef: Communication-Efficient Bayesian Fine Tuning with Reference Model
Predictable Scale: Part II, Farseer: A Refined Scaling Law in Large Language Models
Position Prediction Self-Supervised Learning for Multimodal Satellite Imagery Semantic Segmentation
ScaleRTL: Scaling LLMs with Reasoning Data and Test-Time Compute for Accurate RTL Code Generation
HueManity: Probing Fine-Grained Visual Perception in MLLMs
AKReF: An argumentative knowledge representation framework for structured argumentation
Large Language Models Often Know When They Are Being Evaluated
Dynamic Risk Assessments for Offensive Cybersecurity Agents
How Hungry is AI? Benchmarking Energy, Water, and Carbon Footprint of LLM Inference
Diffused Responsibility: Analyzing the Energy Consumption of Generative Text-to-Audio Diffusion Models
Flow-GRPO: Training Flow Matching Models via Online RL
On the Need for a Statistical Foundation in Scenario-Based Testing of Autonomous Vehicles
What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift
TD-EVAL: Revisiting Task-Oriented Dialogue Evaluation by Combining Turn-Level Precision with Dialogue-Level Comparisons
MobileCity: An Efficient Framework for Large-Scale Urban Behavior Simulation
Semantic Adapter for Universal Text Embeddings: Diagnosing and Mitigating Negation Blindness to Enhance Universality
Leveraging LLMs for User Stories in AI Systems: UStAI Dataset
Large Language Models are Unreliable for Cyber Threat Intelligence
AnnoPage Dataset: Dataset of Non-Textual Elements in Documents with Fine-Grained Categorization
A Thorough Assessment of the Non-IID Data Impact in Federated Learning
Visual Position Prompt for MLLM based Visual Grounding
Neurons: Emulating the Human Visual Cortex Improves Fidelity and Interpretability in fMRI-to-Video Reconstruction
FADE: Why Bad Descriptions Happen to Good Features
FlipConcept: Tuning-Free Multi-Concept Personalization for Text-to-Image Generation
LUMINA-Net: Low-light Upgrade through Multi-stage Illumination and Noise Adaptation Network for Image Enhancement
Towards Geo-Culturally Grounded LLM Generations
Learning to Reason at the Frontier of Learnability
Flexible and Efficient Grammar-Constrained Decoding
PATCH: a deep learning method to assess heterogeneity of artistic practice in historical paintings
The Impact of Modern AI in Metadata Management
Learning an Effective Premise Retrieval Model for Efficient Mathematical Formalization
ChipAlign: Instruction Alignment in Large Language Models for Chip Design via Geodesic Interpolation
Many Objective Problems Where Crossover is Provably Essential
Patherea: Cell Detection and Classification for the 2020s
ViTally Consistent: Scaling Biological Representation Learning for Cell Microscopy
TextDestroyer: A Training- and Annotation-Free Diffusion Method for Destroying Anomal Text from Images
Quantifying calibration error in modern neural networks through evidence based theory
Multi-view biomedical foundation models for molecule-target and property prediction
Reinforced Imitative Trajectory Planning for Urban Automated Driving
Distilling Invariant Representations with Dual Augmentation
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects
Linearly-Interpretable Concept Embedding Models for Text Analysis
Towards Understanding Link Predictor Generalizability Under Distribution Shifts
StreakNet-Arch: An Anti-scattering Network-based Architecture for Underwater Carrier LiDAR-Radar Imaging
Enhancing Trust in Autonomous Agents: An Architecture for Accountability and Explainability through Blockchain and Large Language Models
On the Statistical Properties of Generative Adversarial Models for Low Intrinsic Data Dimension
Programming Distributed Collective Processes in the eXchange Calculus
Holistic analysis on the sustainability of Federated Learning across AI product lifecycle
Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory
Epic-Sounds: A Large-scale Dataset of Actions That Sound
From Semantic Web and MAS to Agentic AI: A Unified Narrative of the Web of Agents
On Gradual Semantics for Assumption-Based Argumentation
The Challenge of Teaching Reasoning to LLMs Without RL or Distillation
Continuous Classification Aggregation
Can Prompt Difficulty be Online Predicted for Accelerating RL Finetuning of Reasoning Models?
MacOSWorld: A Multilingual Interactive Benchmark for GUI Agents
GeoChain: Multimodal Chain-of-Thought for Geographic Reasoning
Lost in Transmission: When and Why LLMs Fail to Reason Globally
A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems
System 0/1/2/3: Quad-process theory for multi-timescale embodied collective cognitive systems
Practical Principles for AI Cost and Compute Accounting
Generative Emergent Communication: Large Language Model is a Collective World Model
Proactive Agents for Multi-Turn Text-to-Image Generation Under Uncertainty
Learning Lifted STRIPS Models from Action Traces Alone: A Simple, General, and Scalable Solution
Hallucination Detox: Sensitivity Dropout (SenD) for Large Language Model Training
Life, uh, Finds a Way: Hyperadaptability by Behavioral Search
Governance of Generative Artificial Intelligence for Companies
RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality
Artificial Intelligence Governance for Businesses
Interpreting Radiologist's Intention from Eye Movements in Chest X-ray Diagnosis
S2WTM: Spherical Sliced-Wasserstein Autoencoder for Topic Modeling
LLM-Based Config Synthesis requires Disambiguation
Characterizing State Space Model (SSM) and SSM-Transformer Hybrid Language Model Performance with Long Context Length
EgoVLA: Learning Vision-Language-Action Models from Egocentric Human Videos
Can We Predict Alignment Before Models Finish Thinking? Towards Monitoring Misaligned Reasoning Models
Unit-Based Histopathology Tissue Segmentation via Multi-Level Feature Representation
Advancing Retrieval-Augmented Generation for Structured Enterprise and Internal Data
Mixture of Raytraced Experts
QuRe: Query-Relevant Retrieval through Hard Negative Sampling in Composed Image Retrieval
AutoVDC: Automated Vision Data Cleaning Using Vision-Language Models
Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects
Created by
Haebom
Authors
Wenhao Li, Yudong Xu, Scott Sanner, Elias Boutros Khalil
Overview
This paper analyzes why Vision Transformers (ViTs) perform poorly on the Abstraction and Reasoning Corpus (ARC) benchmark and presents ViTARC, a model designed to address those weaknesses. A standard ViT fails on most ARC tasks even when trained on a million examples per task, indicating that the architecture lacks the representational capacity the benchmark demands. The authors therefore propose ViTARC, which introduces pixel-level input representations, a spatially-aware tokenization scheme, and object-based positional encoding derived from automatic segmentation. Trained with supervised learning alone, ViTARC achieves near-100% solve rates on more than half of the 400 public ARC tasks, suggesting that appropriate inductive biases are essential for abstract visual reasoning even when data is abundant and the input-output mappings are noise-free.
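The pixel-level tokenization idea from the summary above can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the authors' actual code: the function name and representation details are assumptions, showing only how each grid cell could become one token paired with explicit 2D coordinates (so a transformer can use 2D positional encodings instead of a flat 1D raster order).

```python
def tokenize_arc_grid(grid):
    """Flatten an ARC color grid into per-pixel tokens with 2D positions.

    Each cell becomes one token (its color index, 0-9 in ARC), paired
    with its (row, col) coordinate. Keeping positions explicit lets a
    transformer apply 2D positional encodings rather than inferring
    spatial structure from a 1D raster order alone.
    """
    tokens, positions = [], []
    for r, row in enumerate(grid):
        for c, color in enumerate(row):
            tokens.append(color)
            positions.append((r, c))
    return tokens, positions

# A 2x2 grid yields one token per pixel, in row-major order.
tokens, positions = tokenize_arc_grid([[0, 1], [2, 3]])
print(tokens)     # [0, 1, 2, 3]
print(positions)  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Object-based positional encoding (not sketched here) would additionally tag each token with the id of the segmented object it belongs to.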
Takeaways, Limitations
•
Takeaways:
◦
Identifies the representational limits of the ViT architecture and underscores the importance of appropriate inductive biases for abstract visual reasoning.
◦
ViTARC achieves high performance even with abundant data and noise-free mappings, providing a new foundation for transformer-based visual reasoning research.
◦
Shows that techniques such as pixel-level input representation, spatially-aware tokenization, and object-based positional encoding are effective at improving visual reasoning performance.
•
Limitations:
◦
ViTARC is specialized for the ARC benchmark; further research is needed to assess how well it generalizes to other visual reasoning tasks.
◦
Near-100% performance was not achieved on all ARC tasks (only on more than half of them).
◦
Further research is needed on whether the proposed improvements transfer to other transformer-based architectures.
View PDF