This paper presents a novel approach to parameter exploration in latent-variable causal models. Specifically, we propose a graph structure that is stable under marginalization in Gaussian Bayesian networks and, for the first time, reveal a duality between parameter optimization in latent-variable models and the training of feed-forward neural networks. Building on this duality, we develop an algorithm that optimizes the parameters of the graph structure from observed distributions, and we give conditions for the identifiability of causal effects in the Gaussian setting. Finally, we propose a meta-algorithm for verifying the identifiability of causal effects, laying the groundwork for generalizing the duality between neural networks and causal models beyond the Gaussian distribution.