This paper shows that the effectiveness of recently published sophisticated learning techniques and model architectures for link prediction with graph neural networks (GNNs) can be exaggerated when compared against older baseline models. To address this, we systematically revisit graph autoencoders (GAEs) by applying model-agnostic techniques from state-of-the-art methods and tuning hyperparameters. We find that well-tuned GAEs match the performance of recent sophisticated models while offering superior computational efficiency. In particular, we achieve large gains on datasets where structural information dominates and feature information is limited, reaching a state-of-the-art Hits@100 score of 78.41% on the ogbl-ppa dataset. We further analyze the impact of the individual techniques to explain their success and suggest directions for future work. This study highlights the need for stronger baseline models to accurately assess progress in GNN-based link prediction.