This study examines the impact of generative AI (GenAI) tools, such as OpenAI's ChatGPT, on traditional assessment practices. As universities consider remote testing, concerns about academic integrity and educational alignment arise; we therefore explore whether traditional closed-ended mathematics exams retain their educational relevance in an unsupervised environment with GenAI access. To this end, we empirically generated, recorded, and blind-marked GenAI responses to eight undergraduate mathematics exams spanning the entire first-year curriculum at a Russell Group university. We evaluated GenAI's performance at both the module level and the level of the overall first-year curriculum, finding that its performance was comparable to that expected of a first-year undergraduate.