This paper proposes Flash Systolic Array (FSA), a novel systolic array-based architecture, built around the FlashAttention algorithm, for efficient acceleration of Transformer models. Existing systolic array-based accelerators suffer from low utilization and degraded performance because FlashAttention frequently interleaves matrix multiplication with softmax, and the softmax steps are offloaded to external vector units while the array sits idle. FSA implements a novel scheduling algorithm, SystolicAttention, that executes the entire FlashAttention computation within a single systolic array. This enables fine-grained overlap of matrix multiplication and softmax operations without the need for external vector units, significantly improving array utilization. Implemented as synthesizable RTL, FSA achieves 1.77x and 4.83x higher attention FLOPs/s utilization than AWS NeuronCore-v2 and Google TPUv5e, respectively, with only a 12% area overhead.
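To make the interleaving concrete, the sketch below is a minimal NumPy reference of the standard FlashAttention forward pass with online softmax (it is not the paper's implementation; the function name `flash_attention_block` and the tiling parameters are illustrative). Each key/value tile alternates between a matrix multiplication (Q·K^T), a softmax/rescaling step, and a second matrix multiplication (P·V); on a conventional accelerator the two matmuls map onto the systolic array while the exponentiation and rescaling run on separate vector units, which is the utilization gap that FSA's SystolicAttention scheduling targets.

```python
import numpy as np

def flash_attention_block(Q, K, V, tile_size=64):
    """Reference FlashAttention forward pass for one query block (illustrative sketch).

    Every K/V tile requires: matmul #1 (Q @ K_tile^T), an online-softmax
    rescaling step, and matmul #2 (P @ V_tile) -- the interleaving that
    FSA overlaps inside a single systolic array.
    """
    d = Q.shape[-1]
    scale = 1.0 / np.sqrt(d)
    m = np.full((Q.shape[0], 1), -np.inf)   # running row-wise max
    l = np.zeros((Q.shape[0], 1))           # running softmax denominator
    O = np.zeros_like(Q)                    # running (unnormalized) output

    for start in range(0, K.shape[0], tile_size):
        K_t = K[start:start + tile_size]
        V_t = V[start:start + tile_size]

        S = (Q @ K_t.T) * scale             # matmul #1: systolic-array work
        m_new = np.maximum(m, S.max(axis=-1, keepdims=True))
        P = np.exp(S - m_new)               # softmax exponentiation: vector-unit work
        alpha = np.exp(m - m_new)           # rescale factor for earlier tiles

        l = alpha * l + P.sum(axis=-1, keepdims=True)
        O = alpha * O + P @ V_t             # matmul #2, immediately after softmax
        m = m_new

    return O / l                            # final normalization

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q, K, V = (rng.standard_normal((128, 64)) for _ in range(3))
    print(flash_attention_block(Q, K, V).shape)
```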