This paper addresses the problem of identifying common subgoal structures in planning and reinforcement learning, which is central to achieving long-horizon goals. In classical planning, subgoal structures can be expressed as feature-based rules called sketches, which decompose a problem into subproblems; when the resulting subproblems have bounded width, the problem can be solved in polynomial time by a greedy sequence of IW(k) searches. Existing sketch-learning methods, which rely on feature pools and Max-SAT solvers, are limited in both scalability and expressiveness. To address these limitations, we propose a deep reinforcement learning (DRL) method that learns general policies for a modified planning problem in which the successors of a state s are defined as the states reachable from s by an IW(k) search. Experimental evaluation across several domains shows that, while the proposed DRL method does not produce interpretable rule-based sketches, the subgoal decompositions it induces are clearly understandable.
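
As a concrete illustration of the decomposition described above, the following is a minimal Python sketch, assuming states expose a successor generator `succ(s)` and a set of ground atoms `atoms(s)`. The names (`iw_k_reachable`, `greedy_iw_run`, `policy`, `is_goal`) are hypothetical placeholders rather than the paper's actual interface, and the learned DRL policy is stubbed as a black-box selector.

```python
from collections import deque
from itertools import combinations

def iw_k_reachable(s0, succ, atoms, k):
    """Breadth-first search from s0 that prunes any state containing
    no novel tuple of at most k atoms (the IW(k) novelty test).
    Returns the list of states IW(k) reaches from s0.

    `succ(s)` (successor generator) and `atoms(s)` (ground atoms true
    in s, as hashable values) are assumed callables; the names are
    illustrative, not the paper's API."""
    seen = set()

    def is_novel(s):
        fresh = False
        ats = sorted(atoms(s))
        for size in range(1, k + 1):
            for t in combinations(ats, size):
                if t not in seen:
                    seen.add(t)
                    fresh = True
        return fresh

    frontier, reached = deque([s0]), []
    is_novel(s0)  # register the initial state's tuples
    while frontier:
        s = frontier.popleft()
        reached.append(s)
        for s2 in succ(s):
            if is_novel(s2):  # expand only novel successors
                frontier.append(s2)
    return reached

def greedy_iw_run(s0, succ, atoms, is_goal, policy, k=2):
    """Greedy sequence of IW(k) searches over the modified problem:
    the 'successors' of s are the states IW(k) reaches from s, and a
    learned policy (stubbed here as `policy(s, candidates)`, e.g. the
    argmax of a learned value function) picks the next subgoal state."""
    s, trajectory = s0, [s0]
    while not is_goal(s):
        candidates = [c for c in iw_k_reachable(s, succ, atoms, k) if c != s]
        s = policy(s, candidates)
        trajectory.append(s)
    return trajectory
```

The modified problem is visible in `greedy_iw_run`: under these assumptions, the policy never reasons about primitive actions, only about which IW(k)-reachable state to adopt as the next subgoal, which is what makes the induced decomposition easy to inspect.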