This paper builds on recent research exploring the ethical and social implications of large-scale AI models making "moral" judgments. While prior work has focused primarily on alignment with human judgment in thought experiments or on the collective fairness of AI decisions, this paper focuses on AI's most immediate and promising application: assisting or replacing frontline officials in deciding how scarce social resources are allocated or whether benefits are approved. Drawing on the rich history of how societies have designed prioritization mechanisms for scarce resources, the paper uses real-world data on homeless service needs to examine how closely LLM judgments align with human judgment and with vulnerability scoring systems currently in use (to maintain data confidentiality, only locally run, large-scale models are used). The analysis reveals substantial inconsistency in LLM prioritization decisions along several dimensions: across implementations, across LLMs, and between LLMs and vulnerability scoring systems. At the same time, the LLMs show qualitative agreement with typical human judgment in pairwise comparison tests. These results suggest that current-generation AI systems are not yet ready to be integrated into high-stakes societal decision-making.