This paper theoretically and empirically compares the representational quality of contrastive loss and triplet loss, two widely used objectives in deep metric learning. Focusing on intra- and inter-class variance and on optimization behavior (e.g., greedy updates), we conduct task-specific experiments on synthetic data and on real-world datasets such as MNIST and CIFAR-10. We find that triplet loss preserves greater intra- and inter-class variance, which supports fine-grained distinctions, whereas contrastive loss tends to compress intra-class embeddings, obscuring subtle semantic differences. Furthermore, by analyzing the loss-decay rate, activity ratio, and gradient norm, we show that contrastive loss induces many small updates early in training, while triplet loss generates fewer but more robust updates that facilitate learning on hard examples. Results on classification and retrieval tasks across the MNIST, CIFAR-10, CUB-200, and CARS196 datasets show that triplet loss consistently outperforms contrastive loss.
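For reference, the sketch below illustrates the two objectives in their standard margin-based forms (contrastive loss in the Hadsell et al. formulation and the usual triplet margin loss); it is a minimal PyTorch-style illustration, and the margin values, batch shapes, and random inputs are assumptions for demonstration rather than the exact configuration used in the paper's experiments.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Pairwise contrastive loss: pull same-class pairs together,
    push different-class pairs apart beyond a margin.
    z1, z2: (B, D) embeddings; same_class: (B,) float, 1 if same class else 0.
    Margin value is an illustrative assumption."""
    d = F.pairwise_distance(z1, z2)                      # Euclidean distance per pair
    pos = same_class * d.pow(2)                          # attract positive pairs
    neg = (1 - same_class) * F.relu(margin - d).pow(2)   # repel negatives inside the margin
    return 0.5 * (pos + neg).mean()

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss: anchor-positive distance should be smaller than
    anchor-negative distance by at least the margin (illustrative value)."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

# Toy usage on random embeddings (illustrative only).
B, D = 32, 128
z1, z2 = torch.randn(B, D), torch.randn(B, D)
labels = (torch.rand(B) > 0.5).float()
print(contrastive_loss(z1, z2, labels).item())
print(triplet_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D)).item())
```

Note that the contrastive term penalizes every same-class pair toward zero distance, while the triplet term is inactive once the margin is satisfied, which is consistent with the variance and update-behavior differences the paper analyzes.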