This paper emphasizes the importance of assessing the risk-relevant capabilities of AI models and of reporting the results transparently, and proposes STREAM (A Standard for Transparently Reporting Evaluations in AI Model reports), a standard for reporting AI model evaluation results whose first version focuses on chemical and biological (ChemBio) benchmarks. Developed in consultation with 23 experts from government, civil society, academia, and frontier AI companies, STREAM is a practical standard that helps AI developers present evaluation results clearly and in sufficient detail for third parties to assess the rigor of ChemBio evaluations. It illustrates the recommended practices with "gold standard" examples and provides a three-page report template to make the recommendations easy for AI developers to adopt.