The use of large language models (LLMs) is transforming the peer-review process, from helping reviewers write more detailed evaluations to automating the generation of full reviews. In this work, we conducted a controlled experiment to investigate biases in LLM-generated peer reviews with respect to sensitive author metadata, such as institutional affiliation and gender. We found consistent affiliation bias favoring institutions that rank highly in standard academic rankings, as well as biases related to author gender. We also observed that these implicit biases are even more pronounced in token-based soft ratings.
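To make the last point concrete, the sketch below shows one way a token-based soft rating can be computed: as the probability-weighted average over candidate rating tokens in the model's next-token distribution, rather than the single argmax rating. The function name, the 1-10 scale, and the example log-probabilities are illustrative assumptions, not the exact setup used in the experiment.

```python
import numpy as np

def soft_rating(rating_logprobs: dict[str, float]) -> float:
    """Probability-weighted average rating, given the model's log-probabilities
    for each candidate rating token (assumed here to be "1".."10")."""
    tokens = sorted(rating_logprobs, key=int)
    logps = np.array([rating_logprobs[t] for t in tokens])
    probs = np.exp(logps - logps.max())
    probs /= probs.sum()                       # renormalize over rating tokens only
    values = np.array([int(t) for t in tokens])
    return float(np.dot(probs, values))        # expected rating under this distribution

# Made-up log-probabilities for a 1-10 scale, peaked at "7":
example = {str(v): lp for v, lp in zip(range(1, 11),
           [-9.0, -8.0, -6.5, -5.0, -3.0, -1.2, -0.7, -1.5, -4.0, -7.0])}
print(round(soft_rating(example), 2))  # ~6.85: smoother than the hard argmax rating of 7
```

Because the soft rating aggregates the full distribution over rating tokens, small systematic shifts in probability mass (e.g., conditioned on affiliation or gender cues) can surface even when the argmax rating is unchanged.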