This paper highlights the importance of recognizing the information flows and computations that compose data science and machine learning Python notebooks when evaluating, reusing, and adapting them to new tasks. Re-executing and examining notebooks is often impractical due to the difficulty of resolving data and software dependencies. While large language models (LLMs) pretrained on large codebases have been shown to be effective at understanding code without executing it, we observe that they fail to understand some realistic notebooks due to hallucinations and long-context challenges. To address these issues, we propose a notebook understanding task that generates a notebook's information flow graph and the corresponding cell execution dependency graph. We also demonstrate the effectiveness of a "pincer" strategy that uses limited syntactic analysis to assist complete comprehension of a notebook by an LLM. Our Capture and Resolve Assisted Bounding Strategy (CRABS) uses shallow parsing and abstract syntax tree (AST) analysis to capture the correct interpretation of a notebook between lower and upper bound estimates of the set of inter-cell I/Os (information flowing to and from cells via variables). It then uses the LLM, via cell-by-cell zero-shot learning, to resolve the remaining ambiguities and identify the actual data inputs and outputs of each cell. We evaluate and demonstrate the effectiveness of our approach on an annotated dataset of 50 representative, highly up-voted Kaggle notebooks that together contain 3,454 actual cell inputs and outputs. The LLM correctly resolves 1,397 of the 1,425 (98%) ambiguities left unresolved by syntactic analysis of these notebooks. Across the 50 notebooks, CRABS achieves an average F1 score of 98% for identifying inter-cell information flows and an average F1 score of 99% for identifying transitive cell execution dependencies.
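
To make the syntactic "capture" half of the pincer concrete, the sketch below is our own minimal illustration, not the paper's implementation; the function name `cell_io_bounds` and its heuristics are assumptions. It uses Python's standard `ast` module to bound a single cell's variable inputs and outputs: names that are read over-approximate the cell's inputs, names that are assigned are definite outputs, and receivers of method calls such as `df.fillna(...)` may or may not be mutated in place, which is exactly the kind of ambiguity CRABS delegates to the LLM.

```python
import ast

def cell_io_bounds(cell_source: str):
    """Bound a notebook cell's variable inputs and outputs via its AST.

    Returns (inputs, out_lower, out_upper):
      inputs    -- every name read in the cell (an over-approximation of
                   the names flowing in from earlier cells),
      out_lower -- names definitely (re)bound by assignment,
      out_upper -- out_lower plus names whose methods are called and so
                   *might* be mutated in place; the gap between the two
                   bounds is the ambiguity handed to the LLM.
    """
    tree = ast.parse(cell_source)
    inputs, out_lower, maybe = set(), set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                out_lower.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                inputs.add(node.id)
        elif (isinstance(node, ast.Call)
              and isinstance(node.func, ast.Attribute)
              and isinstance(node.func.value, ast.Name)):
            # e.g. df.fillna(0, inplace=True): shallow static analysis
            # cannot tell whether the receiver is modified in place.
            maybe.add(node.func.value.id)
    return inputs, out_lower, out_lower | maybe

if __name__ == "__main__":
    # 'raw' flows in; 'clean' definitely flows out; whether
    # raw.dropna() also modified 'raw' is left for the LLM to decide.
    print(cell_io_bounds("clean = raw.dropna()\nclean.to_csv('out.csv')"))
```

Applying such bounds cell by cell yields the lower and upper estimates of the inter-cell I/O sets described above; only the gap between them needs to be submitted to the LLM for resolution.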