Meta-learning algorithms from data, rather than relying on conventional manual design, is gaining attention as a paradigm for improving the performance of machine learning systems. This paradigm is especially promising for reinforcement learning (RL), where algorithms are often adapted from supervised or unsupervised learning despite being suboptimal for RL. This study experimentally compares and analyzes different meta-learning approaches, such as evolutionary algorithms for black-box optimization and large language models (LLMs) for proposing code, applied to various parts of the RL pipeline. In addition to meta-training and meta-testing performance, we investigate factors such as the interpretability, sample cost, and training time of each meta-learned algorithm, and propose guidelines for meta-learning more performant RL algorithms in the future.
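To make the evolutionary black-box approach concrete, the following is a minimal, self-contained sketch, not the paper's actual setup: a simple (mu, lambda) evolution strategy meta-learns the hyperparameters of a tabular Q-learning agent on a toy chain MDP. The chain environment, the two-parameter search space, and the names run_episode and inner_loop are all illustrative assumptions introduced here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, HORIZON = 5, 2, 20

def run_episode(q, lr, gamma, eps=0.1):
    """One episode of epsilon-greedy tabular Q-learning on a toy chain MDP.

    Action 1 moves right, action 0 moves left; reward 1 at the right end.
    Updates q in place and returns the episode return.
    """
    s, total = 0, 0.0
    for _ in range(HORIZON):
        a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(q[s]))
        s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        q[s, a] += lr * (r + gamma * q[s_next].max() - q[s, a])  # TD update
        s, total = s_next, total + r
    return total

def inner_loop(theta, episodes=30):
    """Meta-objective: train a fresh agent with candidate (lr, gamma) and
    score it by mean return -- a black-box function of theta."""
    lr, gamma = np.clip(theta, 0.01, 0.99)
    q = np.zeros((N_STATES, N_ACTIONS))
    return float(np.mean([run_episode(q, lr, gamma) for _ in range(episodes)]))

# (mu, lambda) evolution strategy over the 2-D meta-parameter vector.
theta, sigma, lam, mu = np.array([0.5, 0.5]), 0.2, 16, 4
for gen in range(20):
    pop = theta + sigma * rng.standard_normal((lam, 2))   # sample candidates
    scores = np.array([inner_loop(p) for p in pop])       # evaluate black box
    elite = pop[np.argsort(scores)[-mu:]]                 # keep the best mu
    theta = elite.mean(axis=0)                            # recombine by averaging
print("meta-learned (lr, gamma):", np.round(theta, 3))
```

The same outer loop applies unchanged if theta instead parametrizes a learned loss or update rule rather than two scalar hyperparameters; the evolution strategy only ever sees the scalar score returned by the inner training run, which is what makes the approach black-box.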