In this paper, we present Skill-based Latent Action Control (SLAC), a novel methodology for scaling reinforcement learning (RL) to home and industrial robots that require control of high-degree-of-freedom (DoF) systems. To address the safe-exploration and sample-efficiency challenges of traditional real-world RL, as well as the simulation-to-reality gap, SLAC pretrains a task-agnostic latent action space in a low-fidelity simulator. This latent action space is learned with an unsupervised skill discovery method designed to promote temporal abstraction, disentanglement, and safety, and then serves as the action interface for a novel off-policy RL algorithm that autonomously learns downstream tasks through real-world interaction. Experimental results show that SLAC achieves state-of-the-art performance on a variety of bimanual manipulation tasks, learning contact-rich whole-body tasks in under an hour without any demonstrations or hand-crafted action priors.
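The two-phase structure described above can be sketched minimally as follows. This is a hypothetical illustration, not the authors' implementation: the decoder stands in for the skill policy discovered in simulation, `N_DOF`, `LATENT_DIM`, and all function names are assumptions, and the clipping is a placeholder for whatever safety mechanism the method actually uses.

```python
# Hypothetical sketch of a SLAC-style two-phase pipeline (illustrative only):
# phase 1 yields a latent-to-motor decoder from simulation; phase 2 runs
# downstream RL that acts only through that compact latent interface.
import numpy as np

rng = np.random.default_rng(0)

N_DOF = 17        # assumed high-DoF robot dimensionality
LATENT_DIM = 4    # assumed compact latent action dimensionality

# Phase 1 stand-in: a fixed linear "decoder" plays the role of the
# skill policy pretrained in the low-fidelity simulator.
decoder = rng.standard_normal((N_DOF, LATENT_DIM)) * 0.1

def execute_latent_action(z):
    """Map a temporally abstract latent action to joint commands."""
    q = decoder @ z
    # Placeholder safety shaping: clip commands to a conservative range.
    return np.clip(q, -0.5, 0.5)

# Phase 2 stand-in: downstream RL explores in the 4-D latent space
# instead of the 17-D motor space, which is where the claimed
# sample-efficiency and safe-exploration benefits come from.
def random_latent_policy():
    return rng.standard_normal(LATENT_DIM)

z = random_latent_policy()
q_cmd = execute_latent_action(z)
print("joint command dim:", q_cmd.shape[0])
```

The key design point this sketch illustrates is that the downstream learner never touches the raw motor space: exploration noise is injected only in the low-dimensional latent space, and the pretrained decoder bounds what the robot can physically do.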