CuVSLAM is a state-of-the-art visual simultaneous localization and mapping (VSLAM) solution that operates on a variety of visual-inertial sensors, including multiple RGB and depth cameras and inertial measurement units. It supports arbitrary geometric configurations, from a single RGB camera to as many as 32 cameras, covering a wide range of robotic setups. cuVSLAM is optimized with CUDA, enabling real-time operation with minimal computational overhead on edge devices such as the NVIDIA Jetson. This paper presents the design and implementation of cuVSLAM, a representative use case, and experimental results demonstrating state-of-the-art performance on standard benchmarks.