The PaLM 2 report didn't give a clear token count, so we couldn't pin down its exact scaling. Based on Chinchilla, we could only guess at least 175B parameters and 3.7T tokens; the token count turns out to be about right, but the model is reportedly almost twice that size.
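For reference, here is a back-of-the-envelope sketch of where an estimate like that comes from, assuming the commonly cited Chinchilla rule of thumb of roughly 20 training tokens per parameter (the token budget below is the guessed figure, not an official one):

```python
# Chinchilla-style back-of-the-envelope estimate (rule of thumb: ~20 tokens per parameter)
tokens = 3.7e12           # guessed training token budget (~3.7T tokens)
tokens_per_param = 20     # approximate Chinchilla compute-optimal ratio
optimal_params = tokens / tokens_per_param

print(f"Chinchilla-optimal size: ~{optimal_params / 1e9:.0f}B parameters")
# -> ~185B, in line with the "at least 175B" guess; roughly half the reported size
```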