In this paper, we present MML-SurgAdapt, a unified multi-task framework for handling diverse tasks in surgical procedures, such as step recognition and the assessment of safety-critical aspects in laparoscopic cholecystectomy. We leverage a vision-language model (VLM), specifically CLIP, to address these surgical tasks through natural language supervision. To tackle the partial-annotation problem that arises when integrating data from multiple surgical tasks, we apply Single Positive Multi-Label (SPML) learning, which enables effective training even with incomplete or noisy annotations. Experiments on the Cholec80, Endoscapes2023, and CholecT50 datasets show that MML-SurgAdapt achieves performance comparable to task-specific benchmarks while offering the added advantage of robustness to noisy annotations. It also outperforms existing SPML frameworks and significantly reduces the annotation burden, cutting the required labels by 23%. To the best of our knowledge, this is the first application of SPML to integrate data from multiple surgical tasks, offering a novel and generalizable solution for multi-task learning in surgical computer vision.