Daily Arxiv
A page that collects papers on artificial intelligence published around the world.
Summaries are produced with Google Gemini, and the page is run on a non-profit basis.
Copyright of each paper belongs to its authors and affiliated institutions; please cite the source when sharing.
Structure Transfer: an Inference-Based Calculus for the Transformation of Representations
Ensemble of Pathology Foundation Models for MIDOG 2025 Track 2: Atypical Mitosis Classification
AudioCodecBench: A Comprehensive Benchmark for Audio Codec Evaluation
Understanding Space Is Rocket Science - Only Top Reasoning Models Can Solve Spatial Understanding Tasks
DaMoC: Efficiently Selecting the Optimal Large Language Model for Fine-tuning Domain Tasks Based on Data and Model Compression
Modular Techniques for Synthetic Long-Context Data Generation in Language Model Training and Evaluation
EZhouNet: A framework based on graph neural network and anchor interval for the respiratory sound event detection
AImoclips: A Benchmark for Evaluating Emotion Conveyance in Text-to-Music Generation
TimeCopilot
First Order Model-Based RL through Decoupled Backpropagation
Pilot Study on Generative AI and Critical Thinking in Higher Education Classrooms
Beacon: Post-Training Quantization with Integrated Grid Selection
Is Artificial Intelligence Reshaping the Landscape of the International Academic Community of Geosciences?
Vectorized Attention with Learnable Encoding for Quantum Transformer
Transplant Then Regenerate: A New Paradigm for Text Data Augmentation
Depth-Breadth Synergy in RLVR: Unlocking LLM Reasoning Gains with Adaptive Exploration
MultiGen: Child-Friendly Multilingual Speech Generator with LLMs
StreetViewAI: Making Street View Accessible Using Context-Aware Multimodal AI
Street-Level AI: Are Large Language Models Ready for Real-World Judgments?
The KG-ER Conceptual Schema Language
LOTS of Fashion! Multi-Conditioning for Image Generation via Sketch-Text Pairing
Conditional Video Generation for High-Efficiency Video Compression
TriCLIP-3D: A Unified Parameter-Efficient Framework for Tri-Modal 3D Visual Grounding based on CLIP
Demographic-aware fine-grained classification of pediatric wrist fractures
An Analysis of Action-Value Temporal-Difference Methods That Learn State Values
Stochastic Parameter Decomposition
Auto-Regressive vs Flow-Matching: a Comparative Study of Modeling Paradigms for Text-to-Music Generation
MiniCPM4: Ultra-Efficient LLMs on End Devices
Evaluating the Efficacy of LLM-Based Reasoning for Multiobjective HPC Job Scheduling
How Can I Publish My LLM Benchmark Without Giving the True Answers Away?
Optimization of Module Transferability in Single Image Super-Resolution: Universality Assessment and Cycle Residual Blocks
Transferable Mask Transformer: Cross-domain Semantic Segmentation with Region-adaptive Transferability Estimation
RBT4DNN: Requirements-based Testing of Neural Networks
Robust Offline Imitation Learning Through State-level Trajectory Stitching
Beyond holography: the entropic quantum gravity foundations of image processing
KNighter: Transforming Static Analysis with LLM-Synthesized Checkers
FRIDA to the Rescue! Analyzing Synthetic Data Effectiveness in Object-Based Common Sense Reasoning for Disaster Response
CoDiff: Conditional Diffusion Model for Collaborative 3D Object Detection
Rapid Word Learning Through Meta In-Context Learning
Image Embedding Sampling Method for Diverse Captioning
Is an Ultra Large Natural Image-Based Foundation Model Superior to a Retina-Specific Model for Detecting Ocular and Systemic Diseases?
Extended Histogram-based Outlier Score (EHBOS)
A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models
Breaking the Context Bottleneck on Long Time Series Forecasting
Defending LVLMs Against Vision Attacks through Partial-Perception Supervision
ACING: Actor-Critic for Instruction Learning in Black-Box LLMs
Kolb-Based Experiential Learning for Generalist Agents with Human-Level Kaggle Data Science Performance
Quantifying Calibration Error in Neural Networks Through Evidence-Based Theory
Robust training of implicit generative models for multivariate and heavy-tailed distributions with an invariant statistical loss
Learning from 10 Demos: Generalisable and Sample-Efficient Policy Learning with Oriented Affordance Frames
AutoPETIII: The Tracer Frontier. What Frontier?
Long Input Sequence Network for Long Time Series Forecasting
FFHFlow: Diverse and Uncertainty-Aware Dexterous Grasp Generation via Flow Variational Inference
Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers
MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming
Diffusion on language model encodings for protein sequence generation
Style Transfer to Calvin and Hobbes comics using Stable Diffusion
Autonomation, Not Automation: Activities and Needs of European Fact-checkers as a Basis for Designing Human-Centered AI Systems
Plan Verification for LLM-Based Embodied Task Completion Agents
EigenBench: A Comparative Behavioral Measure of Value Alignment
Oyster-I: Beyond Refusal - Constructive Safety Alignment for Responsible Language Models
Extending FKG.in: Towards a Food Claim Traceability Network
DeepVIS: Bridging Natural Language and Data Visualization Through Step-wise Reasoning
Theory of Mind Using Active Inference: A Framework for Multi-Agent Cooperation
CP-Bench: Evaluating Large Language Models for Constraint Modelling
Axiomatics of Restricted Choices by Linear Orders of Sets with Minimum as Fallback
DMN-Guided Prompting: A Framework for Controlling LLM Behavior
Computational Basis of LLM's Decision Making in Social Simulation
Science Across Languages: Assessing LLM Multilingual Translation of Scientific Papers
Enhancing FKG.in: automating Indian food composition analysis
WASP: A Weight-Space Approach to Detecting Learned Spuriousness
Transferable Belief Model on Quantum Circuits
PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents
(Ir)rationality in AI: State of the Art, Research Challenges and Open Questions
Intelligence Primer
ChronoGraph: A Real-World Graph-Based Multivariate Time Series Dataset
Delta Activations: A Representation for Finetuned Large Language Models
DEXOP: A Device for Robotic Transfer of Dexterous Human Manipulation
Towards a Unified View of Large Language Model Post-Training
No Thoughts Just AI: Biased LLM Recommendations Limit Human Agency in Resume Screening
IPA: An Information-Preserving Input Projection Framework for Efficient Foundation Model Adaptation
SSGaussian: Semantic-Aware and Structure-Preserving 3D Style Transfer
Parking Availability Prediction via Fusing Multi-Source Data with A Self-Supervised Learning Enhanced Spatio-Temporal Inverted Transformer
PARCO: Phoneme-Augmented Robust Contextual ASR via Contrastive Entity Disambiguation
AUDETER: A Large-scale Dataset for Deepfake Audio Detection in Open Worlds
From Editor to Dense Geometry Estimator
Decoupled Entity Representation Learning for Pinterest Ads Ranking
Facts Fade Fast: Evaluating Memorization of Outdated Medical Knowledge in Large Language Models
HumAIne-Chatbot: Real-Time Personalized Conversational AI via Reinforcement Learning
Reinforcement Learning for Robust Ageing-Aware Control of Li-ion Battery Systems with Data-Driven Formal Verification
An Empirical Study of Vulnerabilities in Python Packages and Their Detection
How many patients could we save with LLM priors?
Learning Active Perception via Self-Evolving Preference Optimization for GUI Grounding
MAGneT: Coordinated Multi-Agent Generation of Synthetic Multi-Turn Mental Health Counseling Sessions
VisioFirm: Cross-Platform AI-assisted Annotation Tool for Computer Vision
Crossing the Species Divide: Transfer Learning from Speech to Animal Sounds
YOLO Ensemble for UAV-based Multispectral Defect Detection in Wind Turbine Components
Attention as an Adaptive Filter
TAGAL: Tabular Data Generation using Agentic LLM Methods
Enhancing Technical Documents Retrieval for RAG
The Ramon Llull's Thinking Machine for Automated Ideation
Created by Haebom
Authors
Xinran Zhao, Boyuan Zheng, Chenglei Si, Haofei Yu, Ken Liu, Runlong Zhou, Ruochen Li, Tong Chen, Xiang Li, Yiming Zhang, Tongshuang Wu
Overview
This paper reinterprets Ramon Llull's "Ars combinatoria" as a conceptual foundation for building a modern research-ideation machine. It defines three compositional axes: Topics (e.g., efficiency, adaptivity), Domains (e.g., question answering, machine translation), and Methods (e.g., adversarial training, linear attention), which serve as high-level abstractions of the motivations, problem settings, and technical approaches commonly found in scientific work. Elements are drawn from experts and academic papers, and a large language model (LLM) is prompted with curated combinations of them to generate research ideas that are diverse, relevant, and grounded in recent literature. This modern thinking machine offers a lightweight, interpretable tool for fostering scientific creativity and points toward collaborative human-AI ideation.
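The combine-then-prompt step described above can be pictured with a minimal Python sketch. This is only an illustration of the idea, not the authors' implementation: the axis values are the examples mentioned in the summary, and build_prompt and call_llm are hypothetical placeholders for the curated prompt template and the actual LLM API.

```python
import itertools
import random

# Illustrative axis values only; the actual element lists are curated from
# experts and recent papers and are not reproduced here.
topics = ["efficiency", "adaptivity"]
domains = ["question answering", "machine translation"]
methods = ["adversarial training", "linear attention"]

def build_prompt(topic: str, domain: str, method: str) -> str:
    """Compose one combinatorial prompt over the three axes (hypothetical template)."""
    return (
        f"Propose a research idea that improves {topic} in {domain} "
        f"by adapting {method}. Ground it in recent literature and "
        "state the key hypothesis."
    )

def call_llm(prompt: str) -> str:
    """Placeholder standing in for whichever LLM API is used."""
    return f"[idea generated for: {prompt[:60]}...]"

# Enumerate the Cartesian product of the axes and sample a few combinations to prompt.
combinations = list(itertools.product(topics, domains, methods))
for topic, domain, method in random.sample(combinations, k=3):
    print(call_llm(build_prompt(topic, domain, method)))
```

In practice the curated element lists would be much larger, and generated ideas would still need filtering for relevance and grounding, as the limitations below note.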
Takeaways and Limitations
• Takeaways:
◦ Presents a new framework for generating research ideas with LLMs.
◦ An innovative approach that applies Ramon Llull's Ars combinatoria in a modern context.
◦ Suggests that a lightweight, interpretable tool can enhance scientific creativity.
◦ Explores the potential for collaborative human-AI ideation.
• Limitations:
◦ Results depend on LLM performance, so the model's biases and limitations can affect the output.
◦ The predefinition and curation of the topic, domain, and method axes can be subjective.
◦ The actual research value and feasibility of the generated ideas require further validation.
◦ High reliance on large datasets and expert knowledge.