This paper addresses adversarial robustness in structured data, a field that remains underexplored compared to the vision and language domains. To this end, we propose a novel black-box, decision-based adversarial attack for tabular data. The attack efficiently explores both discrete and continuous feature spaces with minimal oracle access by combining gradient-free direction estimation with iterative boundary search. Extensive experiments demonstrate that the proposed method successfully compromises nearly the entire test set across a variety of models, from classical machine learning classifiers to large language model (LLM)-based pipelines. Notably, the attack consistently achieves a success rate exceeding 90% with a small number of queries per instance. These results highlight the severe vulnerability of tabular models to adversarial perturbations and underscore the urgent need for more robust defenses in real-world decision-making systems.
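
To make the combination of gradient-free direction estimation and iterative boundary search concrete, below is a minimal Python sketch of a generic decision-based attack loop in that spirit. It is not the paper's implementation: the `oracle` callable, all helper names, and all hyperparameters are illustrative assumptions, and the discrete-feature handling the paper describes (e.g., snapping perturbed values back to valid categories) is omitted for brevity. In practice, `oracle` would wrap the target model's hard-label prediction, returning `True` when the predicted label differs from the original.

```python
import numpy as np

def binary_search_to_boundary(oracle, x_orig, x_adv, tol=1e-3):
    """Shrink an adversarial point toward the original along the segment
    between them, stopping just on the adversarial side of the boundary."""
    lo, hi = 0.0, 1.0  # interpolation weight: 0 -> x_adv, 1 -> x_orig
    while hi - lo > tol:
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_adv + mid * x_orig
        if oracle(x_mid):   # still misclassified: safe to move toward x_orig
            lo = mid
        else:
            hi = mid
    return (1 - lo) * x_adv + lo * x_orig

def estimate_direction(oracle, x_bnd, n_samples=50, delta=0.1):
    """Gradient-free direction estimate at a boundary point: average random
    probe directions weighted by whether the probe stays adversarial
    (a Monte Carlo sign estimate; each probe costs one oracle query)."""
    dirs = np.random.randn(n_samples, x_bnd.size)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    signs = np.array([1.0 if oracle(x_bnd + delta * d) else -1.0 for d in dirs])
    grad = (signs[:, None] * dirs).mean(axis=0)
    norm = np.linalg.norm(grad)
    return grad / norm if norm > 0 else dirs[0]

def attack(oracle, x_orig, x_init, n_iters=20, step=0.5):
    """Alternate boundary search and direction estimation, starting from
    any misclassified point x_init. Discrete features would additionally
    need projection onto their valid values after each step (omitted)."""
    x_adv = x_init
    for _ in range(n_iters):
        x_adv = binary_search_to_boundary(oracle, x_orig, x_adv)
        candidate = x_adv + step * estimate_direction(oracle, x_adv)
        if oracle(candidate):  # keep the step only if it stays adversarial
            x_adv = candidate
    return binary_search_to_boundary(oracle, x_orig, x_adv)
```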