This paper studies the distributed optimization problem over a graphon with a continuum of nodes, which is regarded as the limit of distributed network optimization as the number of nodes tends to infinity. Each node has a private local cost function, and the global cost function, which all nodes cooperatively minimize, is the integral of the local cost functions over the node set. We propose stochastic gradient descent and gradient tracking algorithms over the graphon. We establish a general lemma on upper bound estimation for a class of time-varying differential inequalities with negative linear terms, and based on this, we prove that the second moments of the nodes' states are uniformly bounded for both algorithms. In particular, for the stochastic gradient tracking algorithm, we transform the convergence analysis into the asymptotic properties of coupled nonlinear differential inequalities with time-varying coefficients and develop a decoupling method. We show that, by properly choosing the time-varying algorithm gains, the states of all nodes achieve $\mathcal{L}^{\infty}$-consensus for a connected graphon. Furthermore, if the local cost functions are strongly convex, then the states of all nodes converge to the minimizer of the global cost function, and the auxiliary states of the stochastic gradient tracking algorithm converge uniformly in mean square to the gradient of the global cost function at the minimizer.
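
For concreteness, a minimal sketch of the problem formulation follows; the node set $[0,1]$ and the symbols $f_\alpha$, $x_t^\alpha$, $x^\ast$ are assumed notation, not taken from the abstract. Each node $\alpha \in [0,1]$ holds a local cost $f_\alpha : \mathbb{R}^n \to \mathbb{R}$, and the nodes cooperatively solve
$$
\min_{x \in \mathbb{R}^n} F(x) := \int_0^1 f_\alpha(x)\, \mathrm{d}\alpha ,
$$
with $x^\ast$ denoting the minimizer of $F$ when the local costs are strongly convex. In this notation, $\mathcal{L}^{\infty}$-consensus can be read as uniform-over-nodes mean-square agreement, e.g.
$$
\sup_{\alpha \in [0,1]} \mathbb{E}\Big[\big\| x_t^\alpha - \int_0^1 x_t^\beta \, \mathrm{d}\beta \big\|^2\Big] \to 0 \quad \text{as } t \to \infty ,
$$
and the convergence result under strong convexity corresponds to $\sup_{\alpha \in [0,1]} \mathbb{E}\big[\| x_t^\alpha - x^\ast \|^2\big] \to 0$.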