This paper studies the bandit optimization problem for nonconvex, non-smooth functions. In each trial, the loss function is a linear function plus a small random perturbation chosen after observing the player's action. The paper presents regret upper bounds that hold in expectation and with high probability. These results imply improved high-probability regret upper bounds for bandit linear optimization, which is the special case with no perturbation. The paper also presents lower bounds on the expected regret.
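To make the protocol concrete, the following is a minimal sketch of one such bandit round: the player commits to an action, then the adversary fixes a linear loss and a small perturbation, and only the resulting scalar loss is revealed. The uniform-direction player, the perturbation scale `delta`, and all numeric choices are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, delta = 3, 1000, 0.01  # dimension, horizon, perturbation scale (all assumed)

def play():
    # Illustrative player: a uniformly random direction on the unit sphere.
    x = rng.normal(size=d)
    return x / np.linalg.norm(x)

total_loss = 0.0
for t in range(T):
    x = play()                         # player commits to an action first
    theta = rng.normal(size=d)         # adversary's linear coefficient vector
    theta /= np.linalg.norm(theta)
    eps = delta * rng.uniform(-1, 1)   # small perturbation, chosen after seeing x
    loss = theta @ x + eps             # bandit feedback: only this scalar is observed
    total_loss += loss
```

Since each per-round loss is bounded by 1 + delta in absolute value, the cumulative loss is trivially bounded by T(1 + delta); the paper's contribution is regret bounds far below this trivial scale.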