This paper presents a formulation of the multi-task Gaussian Process (MTGP) for Bayesian Optimization (BO) that accounts for dependencies between multiple outputs, together with an accessible derivation of its gradient. Gaussian Processes (GPs) are widely used in machine learning, yet the formulation of MTGPs with dependent outputs, and the derivation of the corresponding gradients, are rarely spelled out in full in the existing literature. This paper aims to fill that gap and to aid the understanding of MTGPs.
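For context, a common way to encode dependencies between outputs is the intrinsic coregionalization model of Bonilla et al. (2008); whether this paper adopts exactly this form is an assumption here, but it illustrates the kind of structure involved. A minimal sketch:

\[
\operatorname{cov}\bigl(f_t(\mathbf{x}),\, f_{t'}(\mathbf{x}')\bigr) \;=\; B_{t,t'}\, k(\mathbf{x}, \mathbf{x}'),
\]

where $f_t$ is the latent function for task (output) $t$, $k$ is a kernel over inputs, and $B \in \mathbb{R}^{T \times T}$ is a positive semi-definite task-covariance matrix whose off-diagonal entries capture correlations between the $T$ outputs.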