This paper reports the results of experiments with various strategies for improving code-mixed humor and sarcasm detection. We explored three approaches: (i) native language sample mixing, (ii) multi-task learning (MTL), and (iii) prompting and instruction fine-tuning of a very large multilingual language model (VMLM). Native language sample mixing adds monolingual task samples to the code-mixed training set, while MTL training uses native language and code-mixed samples from a semantically related task (hate detection in this study). Finally, we evaluated the effectiveness of a VMLM through contextual prompting and instruction fine-tuning across a few trials. Experimental results showed that adding native language samples improved humor and sarcasm detection, with F1-score gains of up to 6.76% and 8.64%, respectively. Training a multilingual language model (MLM) within the MTL framework improved both tasks further, with F1-score gains of up to 10.67% and 12.35%, respectively. In contrast, prompting and instruction fine-tuning of the VMLM did not outperform the other approaches. We additionally conducted ablation studies and error analysis to identify where the models still need improvement, and we have made our code publicly available to ensure reproducibility.
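To make the native language sample mixing idea concrete, the following is a minimal sketch of how monolingual task samples could be blended into a code-mixed training set. All names, the `(text, label)` data layout, and the `mix_ratio` schedule are illustrative assumptions, not the paper's released implementation.

```python
# A minimal sketch of native language sample mixing, assuming task
# datasets are lists of (text, label) pairs. Names and the mixing
# schedule are hypothetical, not taken from the paper's code.
import random


def mix_native_samples(code_mixed, native, mix_ratio=0.5, seed=42):
    """Augment a code-mixed training set with native (monolingual) samples.

    mix_ratio controls how many native samples are added, expressed as a
    fraction of the code-mixed set size (an assumption; the paper may
    determine this quantity differently).
    """
    rng = random.Random(seed)
    n_native = int(len(code_mixed) * mix_ratio)
    sampled = rng.sample(native, min(n_native, len(native)))
    mixed = code_mixed + sampled
    rng.shuffle(mixed)  # interleave native and code-mixed samples
    return mixed


# Hypothetical usage for Hindi-English code-mixed humor detection:
code_mixed_train = [("yeh joke toh full comedy tha lol", 1),
                    ("match kal hai, ready raho", 0)]
native_train = [("what a hilarious punchline", 1),
                ("the meeting is at noon", 0)]

train_set = mix_native_samples(code_mixed_train, native_train)
```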
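The MTL setup can likewise be sketched as a shared multilingual encoder with one classification head per task (humor or sarcasm detection plus the related hate detection task). The class below is a hedged illustration under those assumptions; the pooling strategy, hidden size, and uniform loss weighting are guesses rather than the paper's exact design.

```python
# A minimal sketch of the multi-task setup: a shared encoder (MLM) with
# one classification head per task. Architectural details here are
# assumptions, not necessarily the paper's configuration.
import torch.nn as nn


class SharedEncoderMTL(nn.Module):
    def __init__(self, encoder, hidden_dim=768, num_tasks=2, num_labels=2):
        super().__init__()
        self.encoder = encoder  # e.g., a multilingual transformer
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, num_labels) for _ in range(num_tasks)
        )

    def forward(self, input_ids, attention_mask, task_id):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # [CLS]-style pooling of the final hidden states (an assumption).
        pooled = out.last_hidden_state[:, 0]
        return self.heads[task_id](pooled)


# Hypothetical usage with a multilingual encoder (an assumption):
# from transformers import AutoModel
# encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
# model = SharedEncoderMTL(encoder)
#
# Joint training would sum cross-entropy losses over per-task batches,
# e.g. loss = ce(model(batch_a, mask_a, task_id=0), labels_a)
#           + ce(model(batch_b, mask_b, task_id=1), labels_b)
```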