This paper proposes an improved PGNNIV framework that applies reduced-order modeling techniques to address the scalability challenges of physically guided neural networks with internal variables (PGNNIV) when applied to high-dimensional data, such as fine-grid spatial fields or time-varying systems. We propose alternative decoder architectures based on spectral decomposition, Proper Orthogonal Decomposition (POD), and pre-trained autoencoder mappings. These decoders offer different trade-offs among computational efficiency, accuracy, noise tolerance, and generalization performance, and substantially improve scalability. Furthermore, by incorporating model reuse through transfer learning and fine-tuning strategies, the framework adapts efficiently to new materials or configurations, significantly reducing training time while maintaining or improving model performance. We validate the proposed techniques on a representative case study governed by a nonlinear diffusion equation, demonstrating that the improved PGNNIV framework successfully identifies the underlying constitutive state equation, maintains high prediction accuracy, improves robustness to noise, mitigates overfitting, and reduces computational requirements. The proposed techniques can be adapted to different scenarios depending on data availability, computational resources, and specific modeling goals, and overcome the scalability issues in all of them.
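To make the POD-based decoder idea concrete, the following is a minimal sketch under stated assumptions, not the paper's implementation: a POD basis is extracted from a snapshot matrix of full-field solutions via a thin SVD, and the truncated basis then acts as a fixed linear decoder mapping a low-dimensional coefficient vector back to the full-resolution field. The names `snapshots`, `k`, `encode`, and `decode` are illustrative assumptions.

```python
import numpy as np

def build_pod_decoder(snapshots: np.ndarray, k: int):
    """Build a fixed linear POD encoder/decoder pair from snapshot data.

    snapshots: (n_samples, n_dof) array of full-field solutions (assumed layout).
    k: number of POD modes to retain.
    """
    mean = snapshots.mean(axis=0)
    X = snapshots - mean                       # center the snapshots
    # Thin SVD: right singular vectors span the dominant spatial modes.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k].T                           # (n_dof, k) orthonormal POD modes

    def encode(u: np.ndarray) -> np.ndarray:
        # Project a full field onto the k retained modes.
        return (u - mean) @ basis

    def decode(z: np.ndarray) -> np.ndarray:
        # Reconstruct the full field from k POD coefficients.
        return z @ basis.T + mean

    return encode, decode
```

In a PGNNIV-style pipeline, the network would then only need to predict the k POD coefficients, while reconstruction reduces to a fixed matrix product; this is one plausible source of the scalability and noise-robustness gains the abstract describes, since the truncated basis both shrinks the trainable decoder and filters small-scale noise.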