This paper investigates the effectiveness of introducing an adaptive margin (eMargin) into a contrastive learning framework for time-series representation learning. We explore whether adding an adaptive margin, adjusted according to a predefined similarity threshold, to the standard InfoNCE loss can improve the separation between similar but distinct time steps and enhance performance on downstream tasks. We evaluate clustering and classification performance on three benchmark datasets and find that achieving high scores on unsupervised clustering metrics does not necessarily mean that the learned embeddings are meaningful or effective for downstream tasks. Specifically, adding eMargin to InfoNCE outperforms state-of-the-art baselines on unsupervised clustering metrics, but struggles to achieve competitive results on downstream classification via linear probing. The source code is publicly available.
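To make the idea concrete, the following is a minimal PyTorch sketch of how a threshold-conditioned margin might be folded into InfoNCE. The abstract does not specify the exact eMargin formulation, so the function name, the thresholded penalty on negatives, and all hyperparameter values below are illustrative assumptions rather than the paper's method.

```python
# Hypothetical sketch: InfoNCE with a threshold-based adaptive margin.
# Assumption: the margin is applied to negative pairs whose cosine
# similarity exceeds a predefined threshold, pushing apart time steps
# that are similar but distinct.
import torch
import torch.nn.functional as F


def info_nce_with_margin(z_anchor, z_positive, temperature=0.1,
                         sim_threshold=0.5, margin=0.2):
    """Batch InfoNCE where high-similarity negatives receive an extra
    margin on their logits (an assumed reading of eMargin)."""
    z_a = F.normalize(z_anchor, dim=-1)    # (B, D) anchor embeddings
    z_p = F.normalize(z_positive, dim=-1)  # (B, D) positive embeddings
    sim = z_a @ z_p.T                      # (B, B) cosine similarities
    batch_size = sim.size(0)
    diag = torch.eye(batch_size, dtype=torch.bool, device=sim.device)

    # Inflate the logits of off-diagonal (negative) pairs that are
    # already above the similarity threshold, so the loss works harder
    # to separate them from the anchor.
    penalty = torch.where(~diag & (sim > sim_threshold),
                          torch.full_like(sim, margin),
                          torch.zeros_like(sim))
    logits = (sim + penalty) / temperature

    # Positives sit on the diagonal, so targets are simply 0..B-1.
    labels = torch.arange(batch_size, device=sim.device)
    return F.cross_entropy(logits, labels)
```

Under this reading, the margin only activates for "hard" negatives near the anchor, leaving well-separated pairs untouched; an alternative formulation could instead subtract a margin from the positive logit, which has a similar tightening effect.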