In this paper, we propose LAGAMC, a domain-independent, robust, and efficient generative framework that addresses the challenge of manually classifying rapidly growing volumes of text data. Instead of treating labels as atomic symbols, we associate each label with a predefined description and train a model to generate these descriptions from the input text. During inference, a fine-tuned sentence transformer matches the generated descriptions to the predefined labels. We integrate a dual-objective loss that combines cross-entropy with the cosine similarity between the generated sentences and the predefined target descriptions, ensuring both accuracy and semantic alignment. Its parameter efficiency and versatility across diverse datasets make LAGAMC well suited to real-world applications. Experimental results show that it outperforms existing state-of-the-art models on all evaluated datasets, with improvements of 13.94% in Micro-F1 and 24.85% in Macro-F1 over the closest baseline.
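To make the dual-objective training signal and the inference-time matching concrete, the sketch below shows one plausible implementation. It is a minimal illustration under stated assumptions: the alpha weighting between the two loss terms, the helper names, the example label descriptions, and the use of all-MiniLM-L6-v2 as a stand-in for the fine-tuned sentence transformer are hypothetical choices, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer, util

def dual_objective_loss(logits, target_ids, gen_emb, target_emb,
                        pad_token_id=0, alpha=0.5):
    """Token-level cross-entropy plus a sentence-level cosine term.

    logits:     (batch, seq_len, vocab) decoder outputs
    target_ids: (batch, seq_len) tokenized target label descriptions
    gen_emb:    (batch, dim) embeddings of the generated descriptions
    target_emb: (batch, dim) embeddings of the predefined descriptions
    alpha:      assumed weighting between the two terms (hypothetical)
    """
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)),
                         target_ids.view(-1), ignore_index=pad_token_id)
    # 1 - cosine similarity penalizes semantic drift from the target description
    cos = F.cosine_similarity(gen_emb, target_emb, dim=-1).mean()
    return alpha * ce + (1.0 - alpha) * (1.0 - cos)

# Inference: map each generated description back to a predefined label.
predefined_descriptions = [  # illustrative label descriptions
    "text about sports and athletic events",
    "text about politics and government",
]
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the fine-tuned encoder
label_embs = encoder.encode(predefined_descriptions, convert_to_tensor=True)

def match_label(generated_description, top_k=1):
    gen = encoder.encode(generated_description, convert_to_tensor=True)
    scores = util.cos_sim(gen, label_embs)[0]  # cosine similarity to every label
    return torch.topk(scores, k=top_k).indices.tolist()
```

In such a setup, the sentence embeddings used in the loss would come from the same fine-tuned encoder applied to the decoded generations, so the training objective and the inference-time matching operate in one shared semantic space.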