This paper studies the role of common sense knowledge in natural language inference (NLI). To address the scarcity of existing common sense knowledge resources, we explore the possibility of using large language models (LLMs) as common sense knowledge generators. We analyze two main aspects: the reliability of the LLMs' common sense knowledge generation and the impact of the generated knowledge on NLI prediction accuracy, and we apply modified versions of existing metrics to evaluate the factuality and consistency of the LLMs. Although explicitly incorporating common sense knowledge does not consistently improve the overall results, it is effective in identifying entailment relations and moderately improves the recognition of contradictory and neutral inferences.