This paper focuses on multi-bit spiking neural networks (SNNs), which pursue energy-efficient, high-accuracy AI. In existing multi-bit SNNs, accuracy gains fail to keep pace with the growth in memory and computational cost as the bit width increases. Motivated by the observation that layers differ in importance, this paper proposes an adaptive bit allocation strategy for directly trained SNNs that assigns memory and computational resources to each layer at a fine granularity. By parameterizing the temporal lengths and the bit widths of weights and spikes so that both can be learned and controlled through gradients, we improve the efficiency and accuracy of SNNs. To handle the resulting variable bit widths and temporal lengths, we propose improved spiking neurons that accommodate different temporal lengths, admit gradient derivation with respect to the temporal length, and are better suited to spike quantization. Furthermore, we theoretically formalize the step-size mismatch problem that arises with learnable bit widths and propose a step-size update mechanism to mitigate the severe quantization errors it causes. Experiments on various datasets, including CIFAR, ImageNet, CIFAR-DVS, DVS-GESTURE, and SHD, demonstrate that the proposed method improves accuracy while reducing overall memory and computational costs. In particular, the proposed SEWResNet-34 achieves 2.69% higher accuracy and a 4.16x lower bit budget than the state-of-the-art baseline on ImageNet. The results of this research will be made publicly available in the future.
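To make the step-size mismatch problem concrete, the sketch below shows LSQ-style uniform quantization with a step size, and a hypothetical rescaling rule that keeps the representable range roughly constant when a learnable bit width changes. This is an illustration under stated assumptions, not the paper's exact formulation; the function names and the rescaling rule are assumptions of this sketch.

```python
import numpy as np

def quantize(w, step, bits):
    # Uniform symmetric quantization with a given step size
    # (LSQ-style; illustrative, not the paper's exact method).
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(w / step), -qmax - 1, qmax)
    return q * step

def rescale_step(step, old_bits, new_bits):
    # Hypothetical step-size update when the bit width changes:
    # a step size learned for `old_bits` is mismatched to the new
    # quantization grid, so rescale it to preserve the clipping
    # range [-qmax * step, qmax * step] approximately.
    old_qmax = 2 ** (old_bits - 1) - 1
    new_qmax = 2 ** (new_bits - 1) - 1
    return step * old_qmax / new_qmax

# Example: quantize weights at 4 bits, then shrink to 3 bits and
# rescale the step so the dynamic range is roughly preserved.
w = np.array([0.26, -0.51, 0.83], dtype=np.float32)
w4 = quantize(w, 0.1, 4)
s3 = rescale_step(0.1, 4, 3)
w3 = quantize(w, s3, 3)
```

Without the rescaling step, reusing the 4-bit step size at 3 bits would clip the largest weights and inflate quantization error, which is the kind of mismatch a step-size update mechanism is meant to avoid.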