We study leader reward manipulation in a repeated multi-objective Stackelberg game, in which the leader can strategically influence the followers' deterministic optimal responses, for example by offering a portion of its own reward. The followers' utility functions, which represent their preferences over multiple objectives, are assumed to be linear but unknown to the leader, so their weighting parameters must be inferred through repeated interaction. The leader therefore faces a sequential decision-making problem that requires balancing preference induction against immediate utility maximization. We formalize this problem and propose a manipulation policy based on expected utility (EU) and long-term expected utility (longEU), which guides the leader's action and incentive choices by trading off short-term gains against long-term impact. We show that longEU converges to an optimal manipulation under infinitely repeated interactions. Experimental results in a benchmark environment demonstrate that the proposed approach increases cumulative leader utility and promotes mutually beneficial outcomes, even without explicit negotiation or prior knowledge of the followers' utility functions.
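As a rough illustration of the EU/longEU distinction described above, the following Python sketch shows one possible instantiation: the leader keeps a finite set of candidate weight vectors for the follower's linear utility, computes the follower's deterministic best response to each candidate, and either greedily maximizes immediate expected utility (EU) or adds a discounted one-step lookahead over the belief refined by the observed response (longEU). The payoff matrices, the incentive grid, the particle-style belief, and the one-step lookahead are all illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative sketch only: all names and the one-step lookahead are assumptions,
# not the formulation used in the paper.
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_leader, n_follower, n_obj = 3, 3, 2
leader_payoff = rng.uniform(0.0, 1.0, (n_leader, n_follower))        # leader reward per action pair
follower_obj = rng.uniform(0.0, 1.0, (n_leader, n_follower, n_obj))  # follower's multi-objective payoffs

# Belief about the follower's unknown linear weights: a finite candidate set.
belief0 = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 11)]
incentives = np.linspace(0.0, 0.5, 6)  # fraction of leader reward offered to the follower

def follower_best_response(a, share, w):
    """Deterministic follower reply: maximize linear utility plus the transferred reward."""
    utils = follower_obj[a] @ w + share * leader_payoff[a]
    return int(np.argmax(utils))

def expected_utility(a, share, belief):
    """Immediate expected leader utility (EU) under the current belief."""
    payoff = 0.0
    for w in belief:
        b = follower_best_response(a, share, w)
        payoff += (1.0 - share) * leader_payoff[a, b]
    return payoff / len(belief)

def eu_policy(belief):
    """Myopic manipulation: pick the (action, incentive) pair maximizing immediate EU."""
    return max(itertools.product(range(n_leader), incentives),
               key=lambda pair: expected_utility(pair[0], pair[1], belief))

def long_eu_policy(belief, gamma=0.9):
    """longEU-style choice: immediate utility plus a discounted one-step lookahead
    over the belief refined by the follower's observed response."""
    def value(a, share):
        total = 0.0
        for w in belief:  # hypothesize w as the true weight vector
            b = follower_best_response(a, share, w)
            immediate = (1.0 - share) * leader_payoff[a, b]
            refined = [v for v in belief if follower_best_response(a, share, v) == b]
            future = max(expected_utility(a2, s2, refined)
                         for a2, s2 in itertools.product(range(n_leader), incentives))
            total += immediate + gamma * future
        return total / len(belief)
    return max(itertools.product(range(n_leader), incentives),
               key=lambda pair: value(*pair))

a, s = eu_policy(belief0)
print(f"EU choice     : action {a}, incentive {s:.2f}")
a, s = long_eu_policy(belief0)
print(f"longEU choice : action {a}, incentive {s:.2f}")
```

In this toy setting the longEU-style choice can accept a slightly lower immediate payoff when a different action or incentive better disambiguates the candidate weight vectors, which mirrors the short-term versus long-term trade-off discussed above.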