DSperse is a modular framework for distributed machine learning inference with strategic cryptographic verification. Operating within the emerging paradigm of distributed zero-knowledge machine learning, DSperse enables goal-directed verification of strategically selected subcomputations, avoiding the high cost and rigidity of circuitizing the full model. These verifiable segments, or "slices," can encompass part or all of the inference pipeline, and global consistency is enforced through auditing, replication, or economic incentives. This architecture supports a practical form of trust minimization, limiting zero-knowledge proofs to the components where they provide the greatest value. We evaluate DSperse using multiple proof systems and report experimental results on memory usage, runtime, and circuit behavior in both sliced and unsliced configurations. By allowing proof boundaries to align flexibly with the logical structure of the model, DSperse supports a scalable, goal-directed verification strategy suited to diverse deployment requirements.