This paper introduces tool unlearning, a novel task of removing the ability to use specific tools from tool-based LLMs. Unlike conventional unlearning, which forgets individual training samples, tool unlearning requires removing knowledge of the tools themselves. This setting poses distinct challenges, including the high cost of optimizing LLMs and the lack of a principled evaluation metric. To address them, we propose ToolDelete, the first approach for unlearning tools from tool-based LLMs. ToolDelete satisfies three key properties of effective tool unlearning, and we introduce a novel Membership Inference Attack (MIA) model for principled evaluation. Extensive experiments on multiple tool-training datasets and tool-based LLMs show that ToolDelete successfully unlearns randomly selected tools while preserving the LLM's knowledge of the remaining tools and its performance on general tasks.