This paper addresses novel class discovery (NCD), a promising research topic in open-world learning. Existing NCD methods cluster unknown novel classes in unlabeled datasets by leveraging labeled data from the same domain, but their performance can degrade significantly when the novel classes are sampled from a distribution different from that of the labeled data. In this paper, we explore the feasibility of NCD in a cross-domain setting, where the novel classes follow a different distribution than the labeled data, and we establish that it is feasible under the prerequisite of removing style information. To this end, we introduce a dedicated style removal module that disentangles style information from the base features to facilitate inference. The module can be easily integrated with existing NCD methods, acting as a plug-in that improves performance on novel classes drawn from distributions different from that of the labeled data. In addition, we analyze the impact of different backbones and pretraining strategies on the performance of NCD methods, thereby establishing a fair benchmark for future NCD research. Extensive experiments on three commonly used datasets demonstrate the effectiveness of the proposed style removal strategy.
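To make the plug-in idea concrete, the following is a minimal, hypothetical PyTorch sketch of a style removal module. It assumes style can be approximated by channel-wise feature statistics (as in instance normalization); the class name `StyleRemoval`, the statistics-based style code, and all dimensions are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of a plug-in style removal module.
# Assumption: style is approximated by per-channel feature statistics
# (mean/std), in the spirit of instance normalization; this illustrates
# the plug-in concept only, not the paper's exact architecture.
import torch
import torch.nn as nn


class StyleRemoval(nn.Module):
    """Splits base features into a normalized (style-free) content part
    and a style code built from channel-wise statistics."""

    def __init__(self, num_channels: int, style_dim: int = 64):
        super().__init__()
        # Projects concatenated (mean, std) statistics into a style code.
        self.style_head = nn.Linear(2 * num_channels, style_dim)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) base features from any NCD backbone.
        mu = feats.mean(dim=(2, 3))                    # per-channel mean, (B, C)
        sigma = feats.std(dim=(2, 3)) + 1e-6           # per-channel std, (B, C)
        # Content features: channel statistics (the style proxy) removed.
        content = (feats - mu[:, :, None, None]) / sigma[:, :, None, None]
        # Style code: extracted separately, e.g. for auxiliary supervision.
        style = self.style_head(torch.cat([mu, sigma], dim=1))
        return content, style


if __name__ == "__main__":
    module = StyleRemoval(num_channels=512)
    x = torch.randn(4, 512, 7, 7)                      # dummy backbone features
    content, style = module(x)
    print(content.shape, style.shape)                  # (4, 512, 7, 7) (4, 64)
```

Under these assumptions, the style-free `content` features would feed the downstream clustering head of an existing NCD method, while the `style` code is kept separate so that distribution-specific appearance cues do not drive the novel-class clustering.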