This paper addresses a challenging interactive task learning scenario we call “blind relocation,” in which an agent must manipulate rigid objects in its environment without knowing some of the key concepts needed to solve the task. For example, a user might ask the agent to “put two Granny Smith apples in a basket,” but the agent cannot reliably identify which objects in the environment are “Granny Smith” apples because it has never been exposed to that concept. We present SECURE, an interactive task learning policy designed for such scenarios. A distinctive feature of SECURE is that it performs semantic analysis both when interpreting embodied conversations and when making decisions. Through embodied conversation, a SECURE agent learns from the user’s embodied corrective feedback when it makes mistakes, and it strategically engages in dialogue to discover useful information about novel concepts relevant to the task. This ability allows the agent to generalize its acquired knowledge to new tasks. We demonstrate that SECURE agents solving blind relocation tasks in a simulated Blocksworld environment and in a real-world apple manipulation environment are more data-efficient than agents that do not engage in conversation or semantic analysis.