This paper addresses the limitations of existing research in assessing the value alignment of large language models (LLMs) and proposes ValueActionLens, a novel evaluation framework that accounts for the "value-action gap." Leveraging a dataset of 14,800 value-based actions spanning 12 cultures and 11 social topics, ValueActionLens measures the alignment between LLMs' stated values and their value-based actions using three metrics. Experimental results show that this alignment is suboptimal and varies substantially across contexts and models. The study further identifies potential harms caused by value-action gaps and demonstrates that inferential explanations are effective in predicting such gaps. These findings highlight the danger of relying solely on stated values to predict LLM behavior and underscore the importance of context-aware assessment of LLM values and value-action gaps.