This paper examines the claim that in-context learning (ICL) can learn new tasks. While ICL enables task solving through next-token prediction without additional training, this ability may stem from inference over the model's prior knowledge and the presented examples rather than from genuine learning. We argue that although ICL can be characterized mathematically as learning, a complete characterization requires empirical study. Through a large-scale analysis, we assess the effectiveness and limitations of ICL by removing or accounting for factors such as memorization, pretraining, distribution shift, and prompt style and phrasing. Our results show that ICL is an effective learning paradigm, but its ability to learn and generalize to unseen tasks is limited. As the number of examples increases, accuracy becomes less sensitive to the example distribution, the model, the prompt style, and the linguistic features of the input. Nevertheless, sensitivity to distribution remains, particularly for prompt styles such as chain-of-thought, which infer patterns from regularities in the prompt. The variation in accuracy across formally similar tasks suggests that the temporal encoding inherent to autoregression is not a robust learning mechanism, and that its ability to generalize is limited.
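To make the setting concrete, the sketch below shows what few-shot ICL evaluation looks like in practice: labeled demonstrations are serialized into a prompt, and the model's next-token prediction is read off as the answer, with no gradient update. This is a minimal illustrative sketch, assuming the Hugging Face `transformers` text-generation pipeline with GPT-2 as a small stand-in model; the sentiment task, examples, and helper names are hypothetical and not taken from the paper.

```python
# Minimal sketch of few-shot in-context learning (ICL) evaluation.
# GPT-2 is only a stand-in model; the task and examples are illustrative.
from transformers import pipeline


def build_prompt(demonstrations, query):
    """Serialize labeled examples and the unlabeled query into one prompt."""
    lines = [f"Input: {x}\nLabel: {y}" for x, y in demonstrations]
    lines.append(f"Input: {query}\nLabel:")
    return "\n\n".join(lines)


demonstrations = [
    ("the movie was wonderful", "positive"),
    ("a dull, lifeless plot", "negative"),
]
prompt = build_prompt(demonstrations, "an instant classic")

# The task is "solved" purely by next-token prediction over the prompt;
# no fine-tuning or parameter update takes place.
generator = pipeline("text-generation", model="gpt2")
out = generator(prompt, max_new_tokens=2, do_sample=False)
prediction = out[0]["generated_text"][len(prompt):].strip()
print(prediction)
```

Because the prediction depends entirely on how the prompt encodes the task, factors such as the choice of demonstrations, their distribution, and the prompt's phrasing are exactly the variables the paper's analysis controls for.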