In this paper, we propose a novel framework that addresses the domain semantic bias problem in zero-shot cross-domain sequential recommendation (ZCDSR). Existing ZCDSR models improve cross-domain knowledge transfer by leveraging large language models (LLMs), but their accuracy is limited by semantic bias arising from vocabulary and content differences between domains. We tackle this issue by improving cross-domain alignment at both the item level and the sequence level. At the item level, we introduce a generalization loss that aligns cross-domain item embeddings, promoting similarity between domains while preserving the distinctive characteristics of items within each domain. At the sequence level, we cluster source-domain user sequences and transfer user behavior patterns through attention-based aggregation, dynamically adapting user embeddings at inference time in the target domain. As a result, our framework enables effective zero-shot recommendation without any target-domain interaction data.
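Since this section only names the two alignment levels, the minimal PyTorch sketch below illustrates one plausible reading of them. The function and class names (item_alignment_loss, SequenceTransfer, fit_prototypes), the soft nearest-neighbor and uniformity terms, the k-means prototyping step, and all hyperparameters are assumptions made for illustration, not the framework's actual formulation.

```python
import torch
import torch.nn.functional as F


def item_alignment_loss(src_emb, tgt_emb, temperature=0.1, uniformity_weight=0.5):
    """Illustrative item-level generalization loss (not the paper's exact form):
    pulls each source-domain item embedding toward its most similar target-domain
    items, while per-domain uniformity terms keep embeddings spread out so that
    domain-specific item characteristics are not collapsed."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    # Soft nearest-neighbor alignment across domains: minimizing the negative
    # temperature-scaled log-sum-exp raises each source item's similarity to
    # its closest target-domain items.
    sim = src @ tgt.T / temperature                       # (n_src, n_tgt)
    align = -temperature * torch.logsumexp(sim, dim=-1).mean()

    # Uniformity term within each domain discourages embedding collapse.
    def uniformity(x):
        return torch.log(torch.exp(-2.0 * torch.cdist(x, x).pow(2)).mean())

    return align + uniformity_weight * (uniformity(src) + uniformity(tgt))


class SequenceTransfer(torch.nn.Module):
    """Illustrative sequence-level transfer: source-domain user-sequence
    embeddings are clustered into behavioral prototypes, and at inference a
    target-domain user embedding attends over these prototypes and is mixed
    with the attended result."""

    def __init__(self, dim, n_clusters=32):
        super().__init__()
        self.query = torch.nn.Linear(dim, dim)
        self.register_buffer("prototypes", torch.zeros(n_clusters, dim))

    @torch.no_grad()
    def fit_prototypes(self, src_seq_emb, n_iters=10):
        """Simple k-means over source-domain sequence embeddings; assumes
        src_seq_emb contains at least n_clusters rows."""
        k = self.prototypes.shape[0]
        centers = src_seq_emb[torch.randperm(src_seq_emb.shape[0])[:k]].clone()
        for _ in range(n_iters):
            assign = torch.cdist(src_seq_emb, centers).argmin(dim=-1)
            for c in range(k):
                members = src_seq_emb[assign == c]
                if members.numel():
                    centers[c] = members.mean(dim=0)
        self.prototypes.copy_(centers)

    def forward(self, user_emb, mix=0.5):
        # Attention of target-domain user embeddings over source behavior
        # prototypes, followed by a convex mix with the original embedding.
        attn = torch.softmax(self.query(user_emb) @ self.prototypes.T, dim=-1)
        transferred = attn @ self.prototypes
        return (1.0 - mix) * user_emb + mix * transferred
```

Under these assumptions, item_alignment_loss would be added to the base sequential-recommendation training objective, fit_prototypes would be run once over source-domain sequence embeddings, and SequenceTransfer.forward would adapt target-domain user embeddings at inference; the cluster count, temperature, and mixing weight are hypothetical hyperparameters.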