This paper presents a theoretical analysis of the energy efficiency of neural, or neuromorphic, optimizers. Such optimizers employ compute-in-memory (CIM) and learning-in-memory (LIM) paradigms to reduce the energy consumed by memory access and memory updates. We derive estimates of the energy-to-solution metric for an ideal neural optimizer, one that adjusts the energy barrier of its physical memory so that the memory-update and consolidation dynamics align with the optimization or annealing dynamics. The analysis captures the non-equilibrium thermodynamics of learning, and the resulting energy-efficiency estimates are independent of the model architecture, depending only on the number of model-update operations (OPS), the number of model parameters, the convergence rate, and the precision of the solution. Finally, we apply the analysis to estimate lower bounds on the energy-to-solution metric for large-scale AI tasks.
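For orientation, a back-of-the-envelope Landauer-style floor gives a sense of how such a bound scales with the quantities listed above. This is a generic illustration, not the bound derived in the paper; the symbols $N_p$ (number of model parameters) and $N_{\mathrm{ops}}$ (number of update operations per parameter) are introduced here only for this sketch, and we assume each parameter update dissipates at least the Landauer limit for erasing one bit at temperature $T$:
\[
E_{\text{total}} \;\gtrsim\; N_p \, N_{\mathrm{ops}} \, k_B T \ln 2 .
\]
The estimates developed in the paper refine this picture by additionally accounting for the convergence rate and the precision of the solution.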