This paper presents an algorithm that automatically determines the optimal number of demos for In-Context Learning (ICL) in tabular data classification. Unlike existing random-selection algorithms, it accounts for the distribution of the tabular data, the user-selected prompt template, and the specific Large Language Model (LLM). Drawing on spectral graph theory, we define a new metric that quantifies the similarity between demos, construct a similarity graph, and analyze the eigenvalues of its Laplacian to derive the minimum number of demos that can represent the data in the LLM's internal representation space. We verify the effectiveness of the proposed method through experiments on diverse datasets and LLMs.
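The eigenvalue analysis described above can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the paper's method: it uses an RBF kernel over synthetic embeddings in place of the paper's LLM-derived similarity metric, builds the symmetric normalized Laplacian, and reads a demo count off the largest eigengap, a standard heuristic from spectral clustering.

```python
import numpy as np

def estimate_num_demos(embeddings, sigma=1.0):
    """Estimate a demo count from the eigengap of a similarity graph's Laplacian.

    embeddings: (n, d) array of candidate-demo representations. Here they are
    synthetic; the paper instead works in the LLM's internal representation
    space with its own similarity metric.
    """
    n = embeddings.shape[0]
    # RBF similarity between demos (a hypothetical stand-in for the paper's metric).
    sq_dists = np.sum((embeddings[:, None, :] - embeddings[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(n) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    eigvals = np.sort(np.linalg.eigvalsh(L))
    # The largest gap between consecutive eigenvalues indicates the number of
    # well-separated groups, i.e., a minimal set of representative demos.
    gaps = np.diff(eigvals)
    return int(np.argmax(gaps) + 1), eigvals

rng = np.random.default_rng(0)
# Three tight synthetic clusters standing in for three groups of similar demos.
X = np.vstack([rng.normal(c, 0.1, size=(10, 4)) for c in (0.0, 3.0, 6.0)])
k, _ = estimate_num_demos(X)
print(k)  # 3: one representative demo per cluster
```

Because the three clusters are nearly disconnected in the similarity graph, the Laplacian has three eigenvalues close to zero, and the eigengap after the third recovers the cluster count.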