FineScope is a framework for deriving compact, domain-optimized LLMs from large pre-trained language models while preserving efficiency and robust performance. It leverages Sparse Autoencoders (SAEs), which produce interpretable feature representations, to extract domain-specific subsets from large datasets, and then applies structural pruning under domain-specific constraints so that the pruned model retains essential knowledge of the target domain. Finally, FineScope performs self-distillation on the SAE-curated datasets to recover key domain-specific information lost during pruning.
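To make the SAE-guided curation step concrete, the following is a minimal sketch, not FineScope's actual implementation: a toy ReLU autoencoder encoder maps hidden states to sparse feature activations, and samples are scored by their activation on features assumed (hypothetically, for illustration) to be domain-relevant, with the top-scoring samples kept as the domain subset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden states for 8 samples with hidden dimension 16.
H = rng.normal(size=(8, 16))

# Toy SAE encoder: project 16-dim states to 32 overcomplete features.
# (Weights are random here; a real SAE would be trained with an L1
# sparsity penalty on the activations.)
W_enc = rng.normal(scale=0.1, size=(16, 32))
b_enc = np.zeros(32)

def sae_encode(h):
    # ReLU yields non-negative, mostly-zero (sparse) feature activations.
    return np.maximum(h @ W_enc + b_enc, 0.0)

codes = sae_encode(H)

# Assume (purely for illustration) that features 3 and 7 were identified
# as domain-relevant; score each sample by its activation on them.
domain_feats = [3, 7]
scores = codes[:, domain_feats].sum(axis=1)

# Keep the top-k samples most aligned with the domain features.
k = 3
selected = np.argsort(scores)[::-1][:k]
print(selected.tolist())
```

The same scoring idea extends to curating a fine-tuning corpus: rank a large dataset by domain-feature activation and retain the highest-scoring examples for pruning-aware distillation.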