This paper addresses the observation that approximate second-order optimization methods tend to generalize worse than first-order methods. Through a loss-landscape analysis, we find that existing second-order methods converge to sharper minima than SGD. Motivated by this, we propose Sassha, a novel second-order optimization method that explicitly reduces the sharpness of the minima it finds to improve generalization. Sassha stabilizes the computation of the approximate Hessian during optimization and incorporates a sharpness-minimization scheme designed to remain efficient under delayed Hessian updates. Across a range of deep learning experiments, we show that Sassha achieves superior generalization compared to existing methods, and we provide a comprehensive analysis covering convergence, robustness, stability, efficiency, and cost.
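To make the mechanism concrete, the following is a minimal sketch of a sharpness-aware, diagonally preconditioned update with a delayed Hessian refresh, in the spirit of the description above. It is an illustrative toy, not the paper's actual algorithm: the objective, the function names (`sassha_like_step`, `hessian_diag`), the perturbation radius `rho`, the learning rate `lr`, and the refresh interval are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical toy objective: an ill-conditioned quadratic, f(w) = 0.5 * w^T A w.
A = np.diag([1.0, 10.0, 100.0])

def grad(w):
    return A @ w

def hessian_diag(w):
    # For this toy problem the Hessian diagonal is known exactly;
    # in practice it would be approximated (e.g., with stochastic estimators).
    return np.diag(A)

def sassha_like_step(w, h_diag, lr=0.1, rho=0.05, eps=1e-8):
    """One sharpness-aware, diagonally preconditioned update (illustrative only)."""
    g = grad(w)
    # Ascent perturbation toward a nearby high-loss point (sharpness minimization).
    w_adv = w + rho * g / (np.linalg.norm(g) + eps)
    g_adv = grad(w_adv)
    # Precondition the perturbed gradient with the (lazily refreshed) Hessian diagonal.
    return w - lr * g_adv / (np.abs(h_diag) + eps)

w = np.array([1.0, 1.0, 1.0])
h_diag = hessian_diag(w)
for t in range(50):
    if t % 10 == 0:          # delayed Hessian refresh for efficiency
        h_diag = hessian_diag(w)
    w = sassha_like_step(w, h_diag)
print(w)  # converges toward the minimizer at the origin
```

The sketch only illustrates the interplay of the three ingredients named in the abstract: a sharpness-reducing perturbation, a preconditioner built from an approximate Hessian, and infrequent Hessian updates to keep the per-step cost low.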