We present SecureBERT 2.0, a specialized language model for analyzing cybersecurity and threat intelligence data. Built on the ModernBERT architecture, it improves long-context modeling and hierarchical encoding to process threat reports and source-code artifacts efficiently. The model is pre-trained on a domain-specific corpus of 13 billion text tokens and 53 million code tokens, roughly 13 times larger than the corpora used for previous models, and demonstrates improved performance on threat-intelligence semantic search, semantic analysis, cybersecurity-specific named entity recognition, and automated vulnerability detection.
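The abstract does not specify the retrieval pipeline behind the semantic-search use case; a common pattern with encoder models of this kind is to mean-pool the per-token embeddings into one vector per document and rank by cosine similarity. The sketch below illustrates that pattern with random stand-in "token embeddings" (the pooling function, array shapes, and dimensions are illustrative assumptions, not details from the paper):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    # Average token vectors per document, ignoring padded positions.
    # token_embeddings: (docs, tokens, dim); attention_mask: (docs, tokens)
    mask = attention_mask[:, :, None].astype(float)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)
    return summed / counts

def cosine_rank(query_vec, doc_vecs):
    # Return document indices sorted from most to least similar.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))

# Toy example: random arrays stand in for real encoder outputs.
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 5, 8))     # 3 docs, 5 tokens each, dim 8
mask = np.ones((3, 5), dtype=int)    # no padding in this toy case
doc_vecs = mean_pool(emb, mask)
order = cosine_rank(doc_vecs[0], doc_vecs)
print(order[0])  # doc 0 ranks itself first (cosine similarity 1.0)
```

In a real deployment, `emb` and `mask` would come from the encoder's last hidden state and tokenizer output; the pooling-and-rank logic stays the same.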