DSperse is a modular framework for distributed machine learning inference with strategic cryptographic verification. Operating within the emerging paradigm of distributed zero-knowledge machine learning, DSperse enables targeted verification of strategically chosen subcomputations, avoiding the high cost and rigidity of circuitizing the full model. These verifiable segments, or "slices," can cover part or all of the inference pipeline, with global consistency maintained through auditing, replication, or economic incentives. This architecture supports a practical form of trust minimization by restricting zero-knowledge proofs to the components where verification provides the greatest value. We evaluate DSperse using multiple proof systems and report experimental results on memory usage, execution time, and circuit behavior in both sliced and unsliced configurations. By allowing proof boundaries to adapt flexibly to the logical structure of the model, DSperse supports a scalable, targeted verification strategy suited to diverse deployment requirements.