White-Basilisk presents a novel approach to software vulnerability detection. Its hybrid architecture integrates Mamba layers, linear self-attention, and a mixture-of-experts framework, achieving state-of-the-art vulnerability detection performance with only 200 million parameters. The model overcomes the context limitations of existing large language models (LLMs), processing very long code sequences in a single pass, and maintains robust performance on imbalanced, real-world datasets. This work not only sets a new standard in code security but also shows that efficiently designed compact models can outperform much larger ones, potentially redefining optimization strategies in AI development for domain-specific applications.
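
To make the architectural ingredients concrete, the following is a minimal PyTorch sketch of a hybrid block that combines a recurrent sequence mixer (standing in for a Mamba layer), linear self-attention, and a mixture-of-experts feed-forward layer. All module names, dimensions, expert counts, block ordering, and the GRU placeholder are illustrative assumptions for exposition only, not White-Basilisk's actual implementation.

```python
# Hedged sketch: hybrid block with (a) a recurrent/state-space-style mixer,
# (b) O(n) linear self-attention, and (c) a top-1 mixture-of-experts FFN.
# Hyperparameters are arbitrary and do not reflect the paper's configuration.
import torch
import torch.nn as nn


class LinearSelfAttention(nn.Module):
    """Linear attention: softmax over the feature axis, O(n) in sequence length."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):                          # x: (batch, seq, d_model)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, n, self.n_heads, self.d_head)
        q = q.view(shape).softmax(dim=-1)          # feature-wise softmax
        k = k.view(shape).softmax(dim=1)           # position-wise softmax
        v = v.view(shape)
        context = torch.einsum("bnhd,bnhe->bhde", k, v)    # summarize sequence once
        out = torch.einsum("bnhd,bhde->bnhe", q, context)  # query the summary
        return self.out(out.reshape(b, n, -1))


class MoEFeedForward(nn.Module):
    """Top-1 routed mixture-of-experts feed-forward layer."""
    def __init__(self, d_model: int, n_experts: int = 4, d_ff: int = 512):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):
        gates = self.router(x).softmax(dim=-1)     # (batch, seq, n_experts)
        top_gate, top_idx = gates.max(dim=-1)      # route each token to one expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_gate[mask].unsqueeze(-1) * expert(x[mask])
        return out


class HybridBlock(nn.Module):
    """One block: sequence mixing (recurrent stand-in or linear attention) + MoE FFN."""
    def __init__(self, d_model: int, use_attention: bool):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        # A real model would use a selective state-space (Mamba) module here;
        # a GRU is only a self-contained placeholder for this sketch.
        self.use_attention = use_attention
        self.mixer = (LinearSelfAttention(d_model) if use_attention
                      else nn.GRU(d_model, d_model, batch_first=True))
        self.moe = MoEFeedForward(d_model)

    def forward(self, x):
        h = self.norm1(x)
        h = self.mixer(h) if self.use_attention else self.mixer(h)[0]
        x = x + h                                  # residual around the mixer
        return x + self.moe(self.norm2(x))         # residual around the MoE FFN


if __name__ == "__main__":
    # Alternate recurrent-style and linear-attention blocks over one long sequence.
    blocks = nn.Sequential(*[HybridBlock(128, use_attention=(i % 2 == 1)) for i in range(4)])
    tokens = torch.randn(1, 8192, 128)             # a single long code sequence
    print(blocks(tokens).shape)                    # torch.Size([1, 8192, 128])
```

Because both the recurrent mixer and linear attention scale linearly with sequence length, a stack of such blocks can, in principle, process an entire source file in one pass rather than splitting it into short windows, which is the property the abstract highlights.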