This paper highlights the tendency to overlook specific social contexts in the development and deployment of AI, and draws on Helen Nissenbaum's concept of contextual integrity to show how this neglect leads to ethical problems. Specifically, it argues that efforts to promote responsible AI can paradoxically serve to justify disregarding existing contextual norms, and it criticizes the treatment of AI ethics as a novel ethical domain. Instead, it advocates a more conservative approach that integrates AI responsibly into existing social contexts and their normative structures, arguing that preserving existing ethics should take priority over innovation in AI ethics. The argument extends to recently emerged foundation models.