This paper addresses the challenge of balancing utility maximization against the costs of resource use, both external movement and internal computation, in the development of artificial intelligence. While this tradeoff has been studied in fully observable environments, our understanding of resource efficiency in partially observable environments remains limited. To address this gap, we develop a variant of the partially observable Markov decision process (POMDP) framework that treats information gained through inference as a resource to be optimized alongside task performance and effort. By solving this problem in a setting with linear-Gaussian dynamics, we uncover fundamental principles of resource-efficient control. We find a phase transition in inference, from a Bayes-optimal approach to one that strategically leaves some uncertainty unresolved. This frugal behavior generates a structured family of effective strategies, facilitating adaptation to subsequent objectives and constraints that were absent from the original optimization. We demonstrate the applicability of the framework and the generality of the derived principles on two nonlinear tasks. Overall, this work lays the foundation for a new type of rational computation that brains and machines alike can use for effective, resource-efficient control under uncertainty.
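As an illustrative sketch only (the symbols, weights, and information measure below are assumptions for exposition, not the paper's exact formulation), an objective of the kind described above can be written as a POMDP control cost augmented with effort and inference terms:
\[
\min_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{T} \underbrace{c(x_t, u_t)}_{\text{task cost}} \;+\; \underbrace{\rho\,\|u_t\|^2}_{\text{movement effort}}\right] \;+\; \underbrace{\beta\, I\!\left(x_{0:T};\, \hat{x}_{0:T}\right)}_{\text{information gained by inference}},
\]
where $\pi$ is the policy, $x_t$ the latent state, $u_t$ the control, $\hat{x}_t$ the agent's internal estimate, and $\rho, \beta \ge 0$ trade off effort and inference cost against task performance; setting $\beta = 0$ recovers standard Bayes-optimal control, while $\beta > 0$ can make it rational to leave some uncertainty unresolved.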