ConTextTab is an in-context learning (ICL) model for tabular data. Existing table-native ICL models are trained exclusively on synthetic data, limiting their ability to leverage the rich semantics and world knowledge contained in real-world tables. ICL models built on pre-trained large language models, by contrast, can only handle a limited amount of context. ConTextTab addresses these challenges by integrating semantic understanding and alignment into a table-native ICL framework. By employing embeddings specialized for different data modalities and training on large-scale, real-world tabular data, ConTextTab achieves state-of-the-art performance across a broad range of benchmarks and sets a new standard on the semantically rich CARTE benchmark. Code and model checkpoints are available on GitHub.
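
To make the tabular ICL setting concrete, the sketch below shows how a table is split into labeled context rows and unlabeled query rows that are consumed together in a single forward pass, with no gradient updates on the target dataset. This is a minimal illustration only: the table, column names, and the commented-out `model.predict(...)` call are hypothetical and do not reflect the actual ConTextTab API.

```python
import pandas as pd

# Toy table with semantically meaningful column names and mixed modalities
# (text, categorical, date, numeric) -- the kind of real-world data that
# semantics-aware tabular ICL is meant to exploit.
data = pd.DataFrame({
    "job_title": ["nurse", "data engineer", "teacher", "lawyer"],
    "city": ["Berlin", "Paris", "Madrid", "Rome"],
    "hired_on": pd.to_datetime(["2019-03-01", "2021-07-15", "2018-09-01", "2022-01-10"]),
    "salary_eur": [48000.0, 72000.0, 41000.0, None],  # last row is the prediction target
})

# In-context labeled examples vs. rows whose target should be predicted.
context = data[data["salary_eur"].notna()]
query = data[data["salary_eur"].isna()].drop(columns=["salary_eur"])

# Hypothetical predictor call: the model conditions on the context rows and
# predicts targets for the query rows in one forward pass (illustrative only).
# predictions = model.predict(context=context, query=query, target="salary_eur")
```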