This study investigated emotional consistency and semantic coherence by analyzing climate change-related conversations on social media (Twitter, Reddit) with two large language models (LLMs), Gemma and Llama. We examined how the LLMs handled emotional content and maintained semantic relationships in continuation and response tasks, analyzing emotional transitions, intensity patterns, and semantic similarity between human-authored and LLM-generated content. Gemma tended to amplify negative emotions, especially anger, while preserving positive emotions such as optimism. Llama preserved emotions more faithfully across a wider emotional spectrum. Both models generated responses with attenuated emotional intensity compared to human-authored content and showed a bias toward positive emotions in the response task. Both models maintained strong semantic similarity to the original text, although performance differed between the continuation and response tasks.
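To illustrate the kind of semantic-similarity scoring described above, the sketch below compares a human-authored post with an LLM-generated continuation using sentence-embedding cosine similarity. This is not the study's pipeline; the library and embedding model ("all-MiniLM-L6-v2") are assumptions chosen for the example.

```python
# Illustrative sketch (not the study's code): score semantic similarity between a
# human-authored post and an LLM-generated continuation or response using
# sentence embeddings and cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Embedding model choice is an assumption for this example.
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def semantic_similarity(human_text: str, generated_text: str) -> float:
    """Cosine similarity between the two texts' embeddings (closer to 1.0 = more similar)."""
    human_vec, gen_vec = embedder.encode([human_text, generated_text], convert_to_tensor=True)
    return util.cos_sim(human_vec, gen_vec).item()

# Hypothetical example: an original tweet vs. a model-generated continuation.
print(semantic_similarity(
    "Record heat waves this summer show why we need climate action now.",
    "These extreme temperatures are a clear signal that emission cuts cannot wait.",
))
```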