This paper proposes a hybrid Bayesian optimization (BO) framework that leverages large language models (LLMs) to tackle multivariate optimization problems in which each objective evaluation requires a slow, labor-intensive experimental measurement. To mitigate entrapment in local minima on nonconvex landscapes, the LLM contributes domain knowledge to the probabilistic search: it suggests promising regions of the search space, which the BO loop then prioritizes when proposing candidates. The framework also improves user engagement by providing real-time natural-language commentary on the optimization process, and it demonstrates performance gains on synthetic benchmarks with up to 15 independent variables as well as on four real-world experimental tasks.
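The core idea of biasing an expensive-to-evaluate optimization loop toward LLM-suggested regions can be illustrated with a minimal sketch. Everything here is hypothetical: `expensive_experiment` stands in for a slow physical measurement, `llm_suggest_regions` is a stub for an actual LLM call, and the random global sampling is a placeholder for a proper BO acquisition step; none of these names come from the paper.

```python
import random

def expensive_experiment(x):
    # Stand-in for a slow, costly measurement (hypothetical objective:
    # squared distance from 0.7 in every coordinate, minimum at 0).
    return sum((xi - 0.7) ** 2 for xi in x)

def llm_suggest_regions(dim, k=3):
    # Placeholder for an LLM call that returns promising sub-boxes of the
    # unit hypercube based on domain knowledge. Here we hard-code boxes
    # containing the optimum; a real system would parse the model's reply.
    return [[(0.5, 1.0)] * dim for _ in range(k)]

def hybrid_optimize(dim=3, budget=30, seed=0):
    rng = random.Random(seed)
    regions = llm_suggest_regions(dim)
    best_x, best_y = None, float("inf")
    for t in range(budget):
        if t % 2 == 0 and regions:
            # Exploit an LLM-suggested region on alternating iterations.
            box = regions[(t // 2) % len(regions)]
            x = [rng.uniform(lo, hi) for lo, hi in box]
        else:
            # Otherwise sample globally (stand-in for the BO
            # surrogate-model/acquisition-function proposal).
            x = [rng.uniform(0.0, 1.0) for _ in range(dim)]
        y = expensive_experiment(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

best_x, best_y = hybrid_optimize()
print(best_y)
```

In a full implementation the global-sampling branch would be replaced by a Gaussian-process surrogate with an acquisition function (e.g. expected improvement), and the LLM suggestions could instead seed the prior or warp the acquisition rather than alternate with it; this sketch only shows the division of labor between the two sources of candidates.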