This paper proposes RBT4DNN, a requirements-based testing method that leverages natural language requirements specifications to address the difficulty of formulating functional requirements for deep neural networks (DNNs). RBT4DNN defines a semantic feature space using a glossary and formalizes the preconditions of functional requirements as logical combinations of these features. Using training data consistent with these feature combinations, it fine-tunes a generative model to reliably produce test inputs that satisfy the preconditions. These tests are then run on the trained DNN, and their outputs are compared against the expected behavior specified by the requirement postconditions. RBT4DNN supports two use cases: detecting defects in trained DNNs, and providing feedback on model generalization through requirements-driven exploration of model behavior during development. Evaluation results demonstrate that RBT4DNN-generated tests are realistic, diverse, and consistent with requirement preconditions, enabling targeted analysis of model behavior and effective defect detection.