This paper addresses the problem of applying the privacy principle of data minimization in recommender systems, which is challenging because such systems rely on vast amounts of personal data. We conduct a feasibility study of data minimization for implicit-feedback inference: we present a novel problem definition, analyze a range of minimization techniques, and investigate the key factors that influence their effectiveness. We demonstrate that substantial reduction of inference data without performance degradation is technically feasible. However, its practicality depends heavily on technical settings (e.g., performance objectives, model selection) and user characteristics (e.g., history size, preference complexity). We therefore conclude that, despite its technical feasibility, data minimization remains a practical challenge, and that its dependence on technical and user context makes a universal standard of data "necessity" difficult to implement.