LLM-based code generation has the potential to revolutionize creative coding tasks, such as live coding, by allowing users to focus on structural motifs rather than syntactic details. With an LLM in the loop, users can explore a variety of chord candidates to better realize their musical intent. However, code generation models struggle to propose unique and diverse chord candidates because they have no direct insight into the audio the chords produce. To better establish the relationship between chord candidates and the generated audio, we investigate the mapping between the chord and audio embedding spaces. We find that chord and audio embeddings do not exhibit a simple linear relationship, but we show that a predictive model can nonetheless learn an embedding alignment map between them. Given a chord, our proposed model predicts the embedding of the resulting audio, constructing a chord-audio embedding alignment map that can be used to target musically diverse outputs.
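
As a rough illustration of the kind of alignment model described above, the sketch below maps a chord embedding to a predicted audio embedding with a small nonlinear network. All concrete details here (embedding dimensions, layer sizes, the MSE training loss, and the class name `ChordToAudioAligner`) are illustrative assumptions, not the configuration used in this work.

```python
# Minimal sketch of a chord-to-audio embedding alignment model.
# Dimensions, architecture, and loss are assumptions for illustration only.
import torch
import torch.nn as nn

class ChordToAudioAligner(nn.Module):
    """Maps a chord embedding to a predicted audio embedding (hypothetical sizes)."""
    def __init__(self, chord_dim=128, audio_dim=512, hidden_dim=256):
        super().__init__()
        # A small nonlinear map, reflecting that the relationship is not simply linear.
        self.net = nn.Sequential(
            nn.Linear(chord_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, audio_dim),
        )

    def forward(self, chord_emb):
        return self.net(chord_emb)

# Example training step on synthetic data (for illustration only).
model = ChordToAudioAligner()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # an MSE alignment loss is an assumption, not necessarily the paper's choice

chord_emb = torch.randn(32, 128)   # batch of chord embeddings (synthetic)
audio_emb = torch.randn(32, 512)   # corresponding audio embeddings (synthetic)

pred = model(chord_emb)
loss = loss_fn(pred, audio_emb)
loss.backward()
optimizer.step()
```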