Learning algorithms from data, rather than designing them by hand, has attracted growing interest as a paradigm for improving the performance of machine learning systems. Meta-learning is particularly promising in reinforcement learning (RL), where algorithms originally developed for supervised or unsupervised learning are frequently applied despite not being optimized for RL. In this paper, we empirically compare meta-learning approaches, including evolutionary algorithms that optimize black-box functions and large language models (LLMs) that propose code. We compare and analyze these meta-learning algorithms across a range of RL pipelines, examining not only meta-training and meta-test performance but also interpretability, sample cost, and training time. Based on these results, we propose guidelines for meta-learning new RL algorithms that maximize the performance of the learned algorithms.