This paper proposes a Monte Carlo Tree Search (MCTS) method with parallel updates for a multi-agent Markov game in a finite-horizon, time-discounted setting, addressing the lateral and longitudinal collaborative decision-making problem in multi-vehicle cooperative driving of Connected and Automated Vehicles (CAVs). By analyzing parallel behaviors in the multi-vehicle collaborative action space under partially steady-state traffic flow, the parallel update method quickly excludes potentially risky actions, increasing search depth without sacrificing search breadth. The proposed method is tested on multiple randomly generated traffic flows; experimental results demonstrate strong robustness, and the method outperforms state-of-the-art reinforcement learning algorithms and heuristic methods. The resulting driving strategy exhibits rationality superior to that of human drivers and improves both traffic efficiency and safety in coordination zones.
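The core mechanism described above, MCTS planning over a finite-horizon, discounted decision process with a cheap safety check that excludes risky actions before expansion, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy single-vehicle MDP, the `reward` and `is_risky` functions, and all parameters are assumptions introduced for the example, and the paper's multi-agent parallel-update rule is approximated here by simple per-node action pruning.

```python
import math
import random

# Toy 1-D longitudinal MDP: state is (position, speed); actions adjust speed.
# Hypothetical stand-in for the paper's traffic model, for illustration only.
ACTIONS = [-1, 0, 1]          # decelerate, keep speed, accelerate
HORIZON = 6                   # finite planning horizon
GAMMA = 0.9                   # time-discount factor

def step(state, action):
    pos, speed = state
    speed = max(0, min(3, speed + action))
    return (pos + speed, speed)

def reward(state):
    pos, speed = state
    # Reward progress; heavily penalize high speed inside a "conflict zone".
    risky = 8 <= pos <= 12 and speed >= 3
    return speed - (10 if risky else 0)

def is_risky(state, action):
    # Cheap one-step safety check used to prune actions before expansion,
    # loosely mirroring "quickly excluding potentially risky actions".
    return reward(step(state, action)) < 0

class Node:
    def __init__(self, state, depth):
        self.state, self.depth = state, depth
        self.children = {}            # action -> Node
        self.visits, self.value = 0, 0.0

def rollout(state, depth):
    # Random simulation to the horizon, accumulating the discounted return.
    total, discount = 0.0, 1.0
    for _ in range(HORIZON - depth):
        state = step(state, random.choice(ACTIONS))
        total += discount * reward(state)
        discount *= GAMMA
    return total

def mcts(root_state, iterations=2000, c=1.4):
    root = Node(root_state, 0)
    for _ in range(iterations):
        node, path = root, [root]
        # Selection/expansion: descend with UCB1 over the pruned action set.
        while node.depth < HORIZON:
            safe = [a for a in ACTIONS if not is_risky(node.state, a)] or ACTIONS
            untried = [a for a in safe if a not in node.children]
            if untried:
                a = random.choice(untried)
                child = Node(step(node.state, a), node.depth + 1)
                node.children[a] = child
                path.append(child)
                node = child
                break
            a = max(safe, key=lambda a: node.children[a].value / node.children[a].visits
                    + c * math.sqrt(math.log(node.visits) / node.children[a].visits))
            node = node.children[a]
            path.append(node)
        # Simulation + backpropagation of the discounted return.
        ret = rollout(node.state, node.depth)
        for n in path:
            n.visits += 1
            n.value += ret
    # Return the most-visited root action (a standard robust-child choice).
    return max(root.children, key=lambda a: root.children[a].visits)
```

Pruning unsafe actions at each node shrinks the effective branching factor, which is what lets a fixed search budget reach greater depth without discarding any of the remaining (safe) breadth, the trade-off the abstract refers to.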