In this paper, we present ConTextTab, a model that achieves state-of-the-art performance in in-context learning (ICL) for tabular data. Existing tabular ICL models are either trained purely on synthetic data, which fails to leverage the rich semantics and world knowledge of real data, or are built on pretrained large language models, which limits the amount of context they can handle. ConTextTab addresses both issues: it retains an architecture suited to the structure of tabular data while adding embeddings specialized for different data types and training on large-scale real-world data. Experimental results show that ConTextTab achieves state-of-the-art performance across a broad range of benchmarks and, in particular, sets a new standard on the semantically rich CARTE benchmark. The source code and trained models are available on GitHub.
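To make the tabular ICL setting concrete (this is a conceptual illustration only, not the ConTextTab architecture or its API): an ICL model consumes a set of labeled context rows together with unlabeled query rows in a single forward pass and emits query predictions without any gradient updates. The sketch below stands in for that interface with a trivial k-nearest-neighbor predictor over numeric features; all names here are assumptions for illustration.

```python
import numpy as np

def icl_predict(context_X, context_y, query_X, k=3):
    """Predict query labels from labeled context rows, with no training step.

    A k-NN majority vote stands in for the single forward pass of a
    tabular ICL model: the "training set" is consumed purely as context.
    """
    preds = []
    for q in query_X:
        # distance from the query row to every context row
        dists = np.linalg.norm(context_X - q, axis=1)
        nearest = np.argsort(dists)[:k]
        # majority vote among the k nearest context rows
        vals, counts = np.unique(context_y[nearest], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Toy table: two numeric features, binary label, two well-separated clusters.
context_X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                      [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
context_y = np.array([0, 0, 0, 1, 1, 1])
query_X = np.array([[0.05, 0.1], [5.05, 5.0]])
print(icl_predict(context_X, context_y, query_X))  # → [0 1]
```

A real tabular ICL model replaces the k-NN rule with a transformer over cell embeddings, which is where the specialized per-data-type embeddings described above come into play.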