This paper aims to accelerate the extraction of structured information from unstructured data (e.g., free-text documents, scientific literature) to enhance scientific discovery and knowledge integration. While large language models (LLMs) have demonstrated excellent performance on a variety of natural language processing tasks, they are less effective in domains requiring specialized knowledge and nuanced understanding, and they transfer poorly across tasks and domains. To address these challenges, we present StructSense, a modular, task-independent, open-source framework that leverages domain-specific symbolic knowledge embedded in ontologies to navigate complex domain content more effectively. StructSense integrates a feedback loop for iterative improvement via self-evaluating judges, as well as a human-in-the-loop mechanism for quality assurance and validation. Through application to a neuroscience information extraction task, we demonstrate that StructSense overcomes both limitations: domain sensitivity and the lack of cross-task generalization.