AdaDexGrasp is a framework that efficiently learns dexterous grasp skills from a small number of human demonstrations and applies them adaptively according to user instructions. It learns multiple grasp skills, each from a single human demonstration, and selects the most appropriate one with a vision-language model (VLM). To improve sample efficiency, it introduces a trajectory-following reward that guides reinforcement learning (RL) toward states close to the human demonstration. It also generalizes beyond the single demonstration through curriculum learning that incrementally increases the variation in object poses. At deployment time, the VLM selects the appropriate skill from the user instruction, bridging the low-level learned skills and the high-level intent. Evaluations in simulation and in real-world environments show that the approach significantly improves RL efficiency and yields human-like grasping strategies across a variety of object configurations. Zero-shot transfer of the learned policy to the real PSYONIC Ability Hand achieves a 90% success rate on the tested objects, significantly outperforming the baseline.
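
The summary does not give the exact form of the trajectory-following reward; as a rough illustration of the idea, such a reward typically compares the current hand state to the time-aligned state of the human demonstration and rewards the policy for staying close to it. The sketch below is a minimal Python example under that assumption; the function name, weights, and state layout (wrist position plus finger joint angles) are illustrative and not taken from AdaDexGrasp.

```python
import numpy as np

def trajectory_following_reward(hand_state, demo_trajectory, step,
                                pos_weight=1.0, joint_weight=0.5):
    """Illustrative trajectory-following reward (not the paper's exact form).

    hand_state:      dict with 'wrist_pos' (3,) and 'joints' (n_joints,)
    demo_trajectory: list of reference states recorded from one human demo
    step:            current environment timestep, used to index the demo
    """
    # Clamp the index so the reward stays defined after the demo ends.
    ref = demo_trajectory[min(step, len(demo_trajectory) - 1)]

    # Distance of the wrist to the demonstrated wrist position.
    wrist_err = np.linalg.norm(hand_state["wrist_pos"] - ref["wrist_pos"])

    # Distance of the finger joints to the demonstrated joint configuration.
    joint_err = np.linalg.norm(hand_state["joints"] - ref["joints"])

    # Exponentiated negative error keeps the reward bounded in (0, 1],
    # giving a dense per-step signal that pulls exploration toward demo-like states.
    return np.exp(-(pos_weight * wrist_err + joint_weight * joint_err))
```

A dense, bounded term of this kind is a common way to turn a single demonstration into per-step guidance for RL, which is consistent with the claim above that the reward steers learning toward states closer to the human demonstration.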