This study investigated students' perceptions of feedback generated by large language models (LLMs) in a higher education setting, providing insights for the effective implementation and adoption of generative AI feedback. In an experiment involving 91 undergraduate students, we compared students' trust in LLM-generated, human-generated, and human-AI co-generated feedback. Specifically, we examined factors influencing feedback type identification, perceptions of feedback quality, and potential biases related to the feedback source. Results showed that when the feedback source was hidden, students rated AI-generated and co-generated feedback as more useful and objective than human feedback. When the source was revealed, however, a stronger negative bias toward AI emerged. Interestingly, this decline in perceived authenticity upon source disclosure was limited to purely AI-generated feedback; co-generated feedback maintained positive perceptions. Prior experience with educational AI improved students' ability to identify LLM-generated feedback and increased their trust in all feedback types. Conversely, students with extensive experience using AI for general purposes tended to rate the feedback as less useful and trustworthy. These results underscore the importance of fostering feedback literacy and AI literacy to support trust in feedback sources and to strengthen the adoption and educational impact of AI-based feedback.