This paper studies algorithmic decision-making in the presence of strategic individual behavior, where machine learning (ML) models make decisions about human agents and the agents can strategically adapt their behavior to improve their future data. Prior work on strategic learning has largely focused on linear settings, in which agents with linear labeling functions best respond to (noisy) linear decision policies. In contrast, this paper considers general non-linear settings, where agents respond to a decision policy using only "local information" about that policy. Moreover, we simultaneously consider decision-maker welfare (model prediction accuracy), social welfare (agent improvement induced by strategic behavior), and agent welfare (the extent to which the ML model underestimates the agents). We first generalize the agent best-response model of prior work to the non-linear setting and then characterize the compatibility of the welfare objectives. We show that the three welfare objectives can be simultaneously optimized only under restrictive conditions that are difficult to satisfy in non-linear settings. These theoretical results imply that existing approaches that maximize the welfare of only a subset of parties inevitably diminish the welfare of the others. We therefore argue for the need to balance the welfare of all parties in non-linear settings and propose an optimization algorithm for balancing these objectives that is suitable for general strategic learning. Experiments on synthetic and real-world data validate the effectiveness of the proposed algorithm.
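
To make the "local information" response concrete, the following minimal sketch (not the paper's exact model) assumes agents can query the gradient of a non-linear scoring policy at their own feature vector and best respond under a quadratic manipulation cost; the function name `local_best_response`, the logistic policy, and the cost parameter are illustrative assumptions rather than the authors' formulation.

```python
import numpy as np

def local_best_response(x, grad_fn, cost=2.0):
    """Hypothetical agent best response using only local information:
    the agent observes the gradient of the (possibly non-linear) decision
    policy at its current features and moves along it, trading score
    improvement against a quadratic manipulation cost.

    Maximizing the linearized utility  g @ delta - (cost / 2) * ||delta||^2
    gives the closed-form move  delta = g / cost.
    """
    g = grad_fn(x)        # local information: policy gradient at the agent's point
    return x + g / cost   # closed-form maximizer of the linearized utility

# Toy non-linear (logistic) policy, assumed purely for illustration.
w = np.array([1.0, -0.5])
score = lambda x: 1.0 / (1.0 + np.exp(-(x @ w)))
grad = lambda x: score(x) * (1.0 - score(x)) * w

x0 = np.array([0.2, 0.7])
x1 = local_best_response(x0, grad, cost=2.0)
print(f"score before: {score(x0):.3f}, after: {score(x1):.3f}")
```

Under such a response model, how much each agent actually improves (social welfare) and how often the policy underestimates agents (agent welfare) both depend on the policy's local geometry, which is why the three objectives interact non-trivially in non-linear settings.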