This paper explores strategies for endowing large language model (LLM)-based agents with learnable, updatable, and lifelong procedural memory, addressing the fragility of such memory in current agents. We propose Memp, a novel method that distills an agent's past trajectories into both fine-grained step-by-step instructions and high-level, script-like abstractions. We study strategies for building, retrieving, and updating procedural memory, and construct a memory repository that evolves with new experience through a dynamic mechanism that continuously adds, revises, and discards its contents. Experiments on TravelPlanner and ALFWorld show that as the memory repository is refined, the agent's success rate and efficiency on similar tasks steadily improve. Moreover, procedural memory built with a stronger model retains its value: transferring it to a weaker model still yields substantial performance gains.
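The build-retrieve-update lifecycle described above can be sketched as a toy procedural-memory repository. All names, the token-overlap retrieval, and the utility-based discard rule are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class MemoryEntry:
    task: str            # description of the task this memory came from
    steps: List[str]     # fine-grained step-by-step instructions
    script: str          # high-level, script-like abstraction
    utility: float = 1.0 # running usefulness score (hypothetical heuristic)


class ProceduralMemory:
    """Toy repository with build, retrieve, update, and discard operations."""

    def __init__(self, discard_threshold: float = 0.2):
        self.entries: List[MemoryEntry] = []
        self.discard_threshold = discard_threshold

    def build(self, task: str, steps: List[str], script: str) -> None:
        # Store a past trajectory as both step instructions and an abstraction.
        self.entries.append(MemoryEntry(task, steps, script))

    def retrieve(self, query: str) -> Optional[MemoryEntry]:
        # Return the most similar stored entry, using token overlap as a
        # stand-in for embedding similarity.
        def similarity(entry: MemoryEntry) -> float:
            q = set(query.lower().split())
            t = set(entry.task.lower().split())
            return len(q & t) / max(len(q | t), 1)

        return max(self.entries, key=similarity, default=None)

    def update(self, entry: MemoryEntry, success: bool) -> None:
        # Reinforce memories that led to success, decay ones that failed,
        # then discard entries whose utility falls below the threshold.
        entry.utility += 0.5 if success else -0.5
        self.entries = [
            e for e in self.entries if e.utility >= self.discard_threshold
        ]
```

A caller would build an entry from a completed trajectory, retrieve it for a similar new task, and update its utility after observing success or failure; the scoring constants here are placeholders for whatever evaluation signal the agent actually receives.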