In this study, we introduce Indirect In-Context Learning, a novel paradigm for generalized In-Context Learning (ICL). In Indirect ICL, we explore demonstration (demo) selection strategies tailored to two real-world scenarios: Mixture of Tasks and Noisy ICL. We systematically evaluate Influence Functions (IFs) as a selection tool for these settings, highlighting their potential to better capture the informativeness of examples in the demo pool. In the Mixture of Tasks setting, we draw demos from a pool of 28 diverse tasks, including MMLU, BigBench, StrategyQA, and CommonsenseQA. Combining BertScore-Recall (BSR) with an IF surrogate model yields mean absolute accuracy improvements of 0.37% and 1.45% in the 3-shot and 5-shot settings, respectively, over traditional ICL metrics. In the Noisy ICL setting, we examine scenarios where demos are mislabeled or subject to adversarial noise. Reweighting traditional ICL selectors (BSR and Cosine Similarity) with an IF-based selector improves accuracy by an average of 2.90% for Cosine Similarity and 2.94% for BSR on the noisy GLUE benchmark. Under adversarial subsetting, we show that task-agnostic demo selection with IFs mitigates backdoor attacks, reducing the attack success rate by 32.89% compared to task-aware methods. In summary, we propose a robust framework for demo selection that generalizes beyond traditional ICL and offers valuable insights into the role of IFs in Indirect ICL.
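The reweighting idea above can be illustrated with a minimal sketch: combine a traditional similarity-based selector score (e.g. cosine similarity or BSR) with an IF-based informativeness score via a weighted sum, then pick the top-k demos. The function name, normalization, and convex-combination weighting here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of IF-based reweighting for demo selection.
# `similarity_scores` stand in for a traditional ICL selector
# (cosine similarity or BSR); `influence_scores` stand in for
# influence-function estimates of each demo's informativeness.
import numpy as np

def select_demos(similarity_scores, influence_scores, k=3, alpha=0.5):
    """Rank candidate demos by a convex combination of a similarity
    score and an IF-based score; return indices of the top-k demos."""
    sim = np.asarray(similarity_scores, dtype=float)
    inf = np.asarray(influence_scores, dtype=float)

    def minmax(x):
        # Min-max normalize so the two score scales are comparable.
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    combined = alpha * minmax(sim) + (1.0 - alpha) * minmax(inf)
    # Sort descending by combined score and keep the top k.
    return np.argsort(-combined)[:k].tolist()

# Usage: 5 candidate demos, select the 3 best under the combined score.
sims = [0.9, 0.2, 0.7, 0.4, 0.8]
infs = [0.1, 0.9, 0.6, 0.3, 0.5]
print(select_demos(sims, infs, k=3, alpha=0.5))  # → [4, 2, 0]
```

Setting `alpha=1.0` recovers the pure similarity-based selector, so the weight cleanly interpolates between task-aware similarity and IF-based informativeness.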