This paper addresses a form of privacy leakage that existing class unlearning evaluations miss because they overlook the underlying class geometry, and presents a simple yet effective solution. Specifically, we propose a membership inference attack (MIA-NN) that detects unlearned samples by exploiting the probabilities the model assigns to classes neighboring the forgotten class. Furthermore, we propose a novel fine-tuning objective based on a Tilted ReWeighting (TRW) distribution, which mitigates this leakage by approximating the distribution over the remaining classes produced by the retrained model. Experimental results show that TRW outperforms existing unlearning methods.
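To make the two ideas concrete, the following is a minimal, hedged sketch rather than the paper's reference implementation: it assumes MIA-NN scores a sample by the probability mass the unlearned model places on the forgotten class's nearest neighboring classes, and that the TRW-style target exponentially tilts and renormalizes the remaining-class probabilities. The helper names (`neighbor_classes`, `mia_nn_score`, `trw_target`) and the tilting temperature are illustrative assumptions, not definitions from the paper.

```python
# Hedged sketch of the neighbor-class attack statistic and a tilted-reweighting
# target; the exact statistic and tilting form used in the paper may differ.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def neighbor_classes(class_means, forgotten_class, k=3):
    """Classes whose feature-space means lie closest to the forgotten class
    (assumed notion of class geometry)."""
    d = np.linalg.norm(class_means - class_means[forgotten_class], axis=1)
    order = np.argsort(d)
    return [c for c in order if c != forgotten_class][:k]

def mia_nn_score(probs, neighbors):
    """Attack statistic: total probability mass the unlearned model assigns to the
    forgotten class's neighbors; high mass suggests the sample was unlearned."""
    return probs[:, neighbors].sum(axis=1)

def trw_target(probs, forgotten_class, tilt=1.0):
    """Illustrative tilted-reweighting target over remaining classes: zero out the
    forgotten class, exponentially tilt the rest, and renormalize (assumed form)."""
    q = probs.copy()
    q[:, forgotten_class] = 0.0
    q = q ** tilt  # exponential tilting of the remaining-class mass
    return q / q.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_classes, dim = 10, 32
    class_means = rng.normal(size=(num_classes, dim))   # stand-in class geometry
    probs = softmax(rng.normal(size=(5, num_classes)))  # stand-in model outputs

    nbrs = neighbor_classes(class_means, forgotten_class=0, k=3)
    print("neighbors of class 0:", nbrs)
    print("MIA-NN scores:", mia_nn_score(probs, nbrs))
    print("TRW-style targets:\n", trw_target(probs, forgotten_class=0, tilt=0.5))
```

Under these assumptions, fine-tuning the unlearned model toward `trw_target` would suppress the excess probability mass on neighboring classes that `mia_nn_score` exploits.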