This study evaluated natural language processing approaches for automated detection of underdiagnosed posttraumatic stress disorder (PTSD) in clinical settings. Using the DAIC-WOZ dataset, we compared general-purpose and mental health-specific Transformer models (BERT/RoBERTa), embedding-based methods (SentenceBERT/LLaMA), and large language model prompting strategies (zero-shot, few-shot, and chain-of-thought). The mental health-specific end-to-end model significantly outperformed its general-purpose counterpart (Mental-RoBERTa AUPRC = 0.675±0.084 vs. RoBERTa-base 0.599±0.145), and SentenceBERT embeddings with a neural network classifier achieved the highest overall performance (AUPRC = 0.758±0.128). Few-shot prompting with DSM-5 criteria was also competitive (AUPRC = 0.737) using only two examples. Performance varied significantly with symptom severity and depressive comorbidity status, with higher accuracy for patients with severe PTSD and those with comorbid depression. These results highlight the potential of domain-adapted embeddings and LLMs for scalable screening, while underscoring the need for improved detection of subtle symptom manifestations and for clinically actionable AI tools for PTSD assessment.
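All model comparisons above are reported in AUPRC (area under the precision-recall curve), the standard metric for imbalanced screening tasks such as PTSD detection. As a minimal sketch of how that metric is computed, the following implements average precision from scratch; the labels and scores are hypothetical illustrations, not drawn from DAIC-WOZ.

```python
import numpy as np

def auprc(y_true, scores):
    """Area under the precision-recall curve (average precision).

    Computed by step-wise integration: AP = sum_n (R_n - R_{n-1}) * P_n,
    evaluating precision/recall at each descending score threshold.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(y_true)[order]
    tp = np.cumsum(y)                          # true positives at each cutoff
    precision = tp / np.arange(1, len(y) + 1)  # TP / (TP + FP)
    recall = tp / y.sum()                      # TP / (TP + FN)
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# Hypothetical screening labels (1 = PTSD) and model scores:
labels = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
print(round(auprc(labels, scores), 3))  # → 0.747
```

Equivalent results can be obtained with `sklearn.metrics.average_precision_score`; the from-scratch version is shown only to make the metric explicit.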