This paper studies agent learning and knowledge acquisition (LKA) for propositions that are either true or false, using a Bayesian approach in which agents receive data and update their beliefs about propositions via a posterior distribution. LKA is formulated in terms of active information: data act as a source of information that modifies the agent's prior beliefs. The data are assumed to provide detailed information about a number of features relevant to the proposition. This leads to a Gibbs distribution posterior, the maximum-entropy distribution relative to the prior, subject to the constraints the data impose on the features. The paper demonstrates that if the number of extracted features is too small, complete learning is impossible, and hence complete knowledge acquisition is impossible as well. Furthermore, it distinguishes between first-order learning (receiving data about features relevant to a proposition) and second-order learning (receiving data about the learning of other agents), and argues that this type of second-order learning does not represent true knowledge acquisition. These results imply that statistical learning algorithms have inherent limitations and do not always produce true knowledge. The theory is illustrated with several examples.
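To make the Gibbs construction concrete, the following is a minimal numerical sketch, not the paper's method: it assumes a hypothetical four-state proposition space, a uniform prior, and two invented feature functions with data-imposed expected values. The maximum-entropy posterior relative to the prior under such moment constraints has the Gibbs form p(x) ∝ prior(x)·exp(Σⱼ λⱼ fⱼ(x)), and the multipliers λ can be found by gradient ascent on the dual.

```python
import numpy as np

# Hypothetical setup: four states of the world, a uniform prior belief,
# and two feature functions f_j(x) whose expectations are constrained
# by the (assumed) data to equal `targets`.
states = np.arange(4)                       # states 0, 1, 2, 3
prior = np.full(4, 0.25)                    # uniform prior
features = np.stack([states, states ** 2])  # f_1(x) = x, f_2(x) = x^2
targets = np.array([2.0, 5.0])              # data-imposed E[f_1], E[f_2]

# Gradient ascent on the dual: the gradient of the dual objective
# lambda . targets - log Z(lambda) is targets - E_p[f].
lam = np.zeros(2)
for _ in range(20000):
    p = prior * np.exp(lam @ features)      # unnormalized Gibbs weights
    p /= p.sum()                            # normalize to a posterior
    lam += 0.05 * (targets - features @ p)  # step toward constraint satisfaction

# p is now (approximately) the Gibbs posterior matching the constraints.
print(np.round(features @ p, 2))            # should print ≈ [2. 5.]
```

The sketch also illustrates the paper's impossibility point: with only these two features constrained, many distinct posteriors remain consistent with the data, so the agent cannot fully learn the true state from the feature constraints alone.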