This paper evaluates the feasibility of building Bayesian networks (BNs) using large language models (LLMs). While LLMs have demonstrated potential as factual knowledge bases, their ability to generate probabilistic knowledge about real-world events remains underexplored. This study explores how the probabilistic knowledge inherent in LLMs can be leveraged to derive probability estimates for statements about events and their relationships within BNs. LLMs thus enable the parameterization of BNs, supporting probabilistic modeling within specific domains. Experiments on 80 publicly available BNs, spanning domains from healthcare to finance, demonstrate that querying LLMs for the conditional probabilities of events yields meaningful results compared to baselines, including random and uniform distributions as well as approaches based on next-token generation probabilities. Specifically, we explore how distributions extracted from LLMs can serve as expert priors that improve data-derived distributions, particularly when data is scarce. Overall, this study presents a promising strategy for automatically constructing Bayesian networks by combining probabilistic knowledge extracted from LLMs with real-world data. It also establishes the first comprehensive baseline for evaluating the performance of LLMs in probabilistic knowledge extraction.
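As a minimal illustration of the prior-combination idea described above (a sketch under stated assumptions, not the paper's implementation: the LLM-elicited probability `llm_prior` and the equivalent sample size `ess` are hypothetical inputs), an LLM-derived probability can act as a Beta prior, the binary case of a Dirichlet prior, that is updated with observed counts:

```python
# Sketch: blend an LLM-elicited probability with observed data counts
# via a Beta prior (the binary-variable case of a Dirichlet prior).
# All names and values here are illustrative assumptions, not the
# paper's actual implementation.

def posterior_mean(llm_prior: float, ess: float,
                   successes: int, trials: int) -> float:
    """Posterior mean of P(event) given:
      llm_prior: probability elicited from the LLM (expert prior),
      ess:       equivalent sample size, i.e. how many pseudo-
                 observations the prior is worth,
      successes: observed occurrences of the event in the data,
      trials:    total number of observations.
    Uses a Beta(alpha, beta) prior with alpha = llm_prior * ess and
    beta = (1 - llm_prior) * ess, updated with the observed counts."""
    alpha = llm_prior * ess + successes
    beta = (1.0 - llm_prior) * ess + (trials - successes)
    return alpha / (alpha + beta)

# Example: the LLM estimates P(disease | symptom) = 0.7, but only
# 5 data points are available (3 positive). With scarce data the LLM
# prior dominates; as data accumulates, the empirical frequency
# takes over.
print(posterior_mean(llm_prior=0.7, ess=10.0, successes=3, trials=5))
# -> 0.666...  (pulled toward the prior despite a 0.6 sample frequency)
```

This kind of weighting captures the abstract's claim that LLM-derived distributions are most useful in the low-data regime: the smaller the observed sample relative to the equivalent sample size, the more the estimate leans on the elicited prior.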