This paper presents results from the Plain Language Adaptation of Biomedical Abstracts (PLABA) track at the Text Retrieval Conference (TREC) in 2023 and 2024. The PLABA track focuses on converting technical abstracts of medical papers into plain language that the general public can easily understand. A variety of models, ranging from multilayer perceptrons to pre-trained large language models (LLMs), were evaluated on two tasks: Task 1, rewriting the entire abstract, and Task 2, identifying and replacing difficult terms. In Task 1, top-tier models achieved expert-level accuracy and completeness but lacked conciseness and clarity, and automated evaluation metrics correlated poorly with manual evaluation. In Task 2, LLM-based systems struggled with identifying difficult terms and classifying replacement methods, but performed well in generating replacement terms in terms of accuracy, completeness, and conciseness.