This paper points out the lack of a comprehensive evaluation of how hyperparameter choices affect the fine-tuning of large language models (LLMs) for tabular understanding, as well as their out-of-domain generalization and general capabilities. We evaluate existing tabular LLMs and find that their out-of-domain tabular understanding and generalization abilities are significantly worse than those of their base models. We show that hyperparameters such as the learning rate have a significant impact on both table-specific and general capabilities, and, in contrast to previous studies, we demonstrate that a small learning rate and few training instances can improve tabular understanding while preserving general capabilities. Based on these findings, we introduce TAMA, a tabular LLM instruction-tuned from LLaMA 3.1 8B Instruct, which achieves performance comparable to or better than GPT-3.5 and GPT-4 on tabular tasks while maintaining strong out-of-domain generalization and general capabilities. These results highlight the potential of careful hyperparameter selection to reduce data annotation costs and improve the efficiency of model development. We open-source our project and model.