This paper presents a method to protect flow-based machine learning (ML) network intrusion detection systems (NIDS) from evasive adversarial attacks. We introduce the concept of feature perturbability and propose a novel perturbability score (PS) that quantifies the degree to which an adversary can manipulate NIDS features in the problem space. PS identifies features that are structurally resistant to evasion: because of domain-specific constraints and correlations in the semantics of network traffic fields, attempts to manipulate these features are likely to compromise the malicious functionality of the attack, render the traffic unprocessable, or both. We introduce a PS-based defense and demonstrate its effectiveness through PS-based feature selection and PS-based feature masking. Experimental results on various ML-based NIDS models and public datasets show that discarding or masking highly manipulable (high-PS) features significantly reduces vulnerability to evasive adversarial attacks while maintaining robust detection performance. In conclusion, PS effectively identifies which flow-based NIDS features are vulnerable to problem-space perturbations. This novel approach leverages problem-space NIDS domain constraints as a lightweight, general-purpose defense mechanism against evasive adversarial attacks targeting flow-based ML-NIDS.
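The two PS-based defenses summarized above can be sketched in a few lines: drop high-PS features before training (selection), or overwrite them with a constant at inference time (masking). This is a minimal illustration, not the paper's implementation; the feature names, PS values, and threshold below are hypothetical assumptions.

```python
# Hypothetical sketch of PS-based feature selection and masking.
# PS values and feature names are illustrative, not taken from the paper.

def select_low_ps_features(feature_names, ps_scores, threshold):
    """Feature selection: keep only features whose PS is below the threshold."""
    return [name for name, ps in zip(feature_names, ps_scores) if ps < threshold]

def mask_high_ps_features(sample, ps_scores, threshold, mask_value=0.0):
    """Feature masking: replace high-PS feature values with a constant."""
    return [x if ps < threshold else mask_value
            for x, ps in zip(sample, ps_scores)]

# Illustrative flow features with made-up PS values
# (high PS = easy for an adversary to perturb in the problem space).
features = ["flow_duration", "pkt_count", "mean_pkt_len", "dst_port"]
ps = [0.9, 0.8, 0.3, 0.1]  # assumed: timing/volume easy to pad; dst_port constrained

print(select_low_ps_features(features, ps, threshold=0.5))
# → ['mean_pkt_len', 'dst_port']
print(mask_high_ps_features([12.5, 40, 512.0, 443], ps, threshold=0.5))
# → [0.0, 0.0, 512.0, 443]
```

In practice the retained (low-PS) feature set would feed an ordinary ML classifier; the defense is model-agnostic because it acts purely on the feature representation.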