This paper investigates whether in-context learning (ICL) in large language models (LLMs) performs structured inference consistent with a Bayesian framework or relies on pattern matching. Using a controlled setting of biased coin flips, we find that (1) LLMs often possess biased priors, causing initial divergence in the zero-shot setting, (2) in-context evidence outweighs explicit bias instructions, (3) LLMs broadly follow Bayesian posterior updates, with deviations attributable primarily to miscalibrated priors rather than flawed updates, and (4) attention magnitude has a negligible effect on Bayesian inference. Given sufficiently many demonstrations of biased coin flips via ICL, LLMs update their priors in a Bayesian manner.
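
As a point of reference for claim (3), the Bayesian update in this setting can be made explicit. The following is an illustrative sketch, assuming a conjugate Beta(α, β) prior over the coin's heads probability θ (the Beta parameterization is our assumption, not notation taken from the abstract): after observing h heads and t tails among the in-context demonstrations, the posterior and the implied probability of the next head are

\[
p(\theta \mid h,\, t) \;\propto\; \underbrace{\theta^{h}(1-\theta)^{t}}_{\text{likelihood}}\; \underbrace{\theta^{\alpha-1}(1-\theta)^{\beta-1}}_{\text{prior}}
\;\Longrightarrow\; \theta \mid h,\, t \;\sim\; \mathrm{Beta}(\alpha+h,\ \beta+t),
\]
\[
P(\text{next flip is heads} \mid h,\, t) \;=\; \mathbb{E}[\theta \mid h,\, t] \;=\; \frac{\alpha+h}{\alpha+\beta+h+t}.
\]

Under this reading, findings (1) and (3) correspond to the prior Beta(α, β) being miscalibrated while the update from (α, β) to (α + h, β + t) proceeds correctly, and the predicted probability is pulled toward the empirical frequency h/(h + t) as the number of demonstrations grows, consistent with the abstract's closing claim.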