This paper focuses on segmenting unstained live-cell images captured by bright-field microscopy. We examine the inconsistent performance of existing cell segmentation methods in high-throughput bright-field live-cell imaging, where temporal phenotypic variation, low contrast, noise, and motion blur caused by cell movement pose persistent challenges. In this study, we develop a low-cost CNN-based pipeline that incorporates frozen encoders, selected through comparative analysis, into a U-Net architecture and augments it with an attention mechanism, an instance recognition system, an adaptive loss function, hard-instance retraining, a dynamic learning rate, an incremental mechanism for overfitting mitigation, and ensemble techniques. Validating the model on a public dataset containing diverse live-cell variants, we demonstrate performance competitive with state-of-the-art methods, achieving a test accuracy of 93% and an average F1 score of 89% (standard deviation 0.07) on low-contrast, noisy, and blurry images. Notably, despite being trained primarily on bright-field images (less than 20% of the images are phase-contrast microscopy), the model generalizes effectively to the phase-contrast LIVECell dataset, demonstrating cross-modality compatibility and robustness. The pipeline requires minimal computing power and can be adapted using basic deep learning setups such as Google Colab, making it highly practical, and it outperforms existing bright-field microscopy segmentation methods in both robustness and accuracy. The code and dataset are made publicly available for reproducibility.