In this paper, we propose Normalized Weight and Activation Guided Compression (NoWag), a unified framework for zero-shot, shape-preserving compression algorithms that addresses the high computational and memory costs limiting the deployment of large language models (LLMs) in resource-constrained environments. We apply NoWag to Llama-2 7B/13B/70B and Llama-3 8B/70B using two forms of shape-preserving compression: vector quantization (NoWag-VQ) and unstructured/semi-structured pruning (NoWag-P). Experimental results show that NoWag-VQ significantly outperforms state-of-the-art zero-shot vector quantization methods, while NoWag-P is competitive with state-of-the-art pruning methods. These results point to commonalities between these distinct compression paradigms and suggest directions for future work. The source code is available on GitHub.
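To make the notion of shape-preserving, activation-guided compression concrete, the sketch below shows a generic semi-structured (n:m) pruning pass in which weight importance is weighted by calibration-activation norms. This is a minimal illustration under stated assumptions, not NoWag's actual algorithm: the function names and the Wanda-style |W|·||x|| importance score are assumptions chosen for exposition only.

```python
import torch

def activation_norms(X: torch.Tensor) -> torch.Tensor:
    """Per-input-channel L2 norms of calibration activations X of shape (n_samples, in_features)."""
    return X.norm(dim=0)

def prune_semistructured(W: torch.Tensor, X: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Illustrative shape-preserving n:m pruning (not NoWag's method): in every group of
    m weights along the input dimension, keep the n with the largest activation-weighted
    magnitude (a Wanda-style |W| * ||x|| score) and zero the rest. The output has the
    same shape as W, so the model architecture is unchanged."""
    score = W.abs() * activation_norms(X)          # (out_features, in_features)
    out_f, in_f = W.shape
    groups = score.view(out_f, in_f // m, m)       # group scores along the input dim
    # indices of the (m - n) lowest-scoring weights in each group
    drop = groups.topk(m - n, dim=-1, largest=False).indices
    mask = torch.ones_like(groups)
    mask.scatter_(-1, drop, 0.0)                   # zero out the dropped positions
    return W * mask.view(out_f, in_f)

# Usage: prune a 4096x4096 layer to 2:4 sparsity using random calibration data
W = torch.randn(4096, 4096)
X = torch.randn(128, 4096)
W_pruned = prune_semistructured(W, X, n=2, m=4)
```

Vector quantization (as in NoWag-VQ) is likewise shape-preserving: weights are replaced by codebook entries of identical dimensions rather than removed, so both paradigms leave the layer shapes of the original model intact.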