This paper proposes Symphony, a distributed multi-agent system that addresses the high deployment costs, inflexible communication topologies, and limited adaptability of existing centralized frameworks built on large language models (LLMs). Symphony coordinates lightweight LLMs running on consumer-grade GPUs and introduces three key mechanisms: a distributed ledger that records agent capabilities, a beacon-based selection protocol for dynamic task allocation, and weighted result voting over chain-of-thought (CoT) outputs. Together, these mechanisms form a low-overhead coordination system that is privacy-preserving, scalable, and fault-tolerant. Experimentally, Symphony outperforms existing baselines on reasoning benchmarks, achieving significant accuracy gains and robust performance across models of widely varying capacity.
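The weighted-voting mechanism can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the general idea of aggregating candidate answers from multiple agents, where each answer carries a weight (e.g. a confidence score derived from the agent's CoT output; the function name and weighting scheme here are illustrative assumptions):

```python
from collections import defaultdict

def weighted_vote(candidates):
    """Aggregate (answer, weight) pairs from multiple agents and
    return the answer with the highest total weight.

    `candidates` is a list of (answer, weight) tuples; the weight is
    assumed to come from some scoring of each agent's CoT reasoning.
    """
    totals = defaultdict(float)
    for answer, weight in candidates:
        totals[answer] += weight
    # Ties are broken arbitrarily by max(); a real system might
    # fall back to a secondary criterion.
    return max(totals, key=totals.get)

# Three agents propose answers with confidence weights:
votes = [("42", 0.9), ("41", 0.6), ("42", 0.5)]
print(weighted_vote(votes))  # "42" wins with total weight 1.4 vs 0.6
```

In this toy run, two agents agree on "42", so their combined weight outvotes the single higher-confidence dissenter for "41".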