This paper demonstrates that a minimal Transformer with fixed weights can emulate a wide range of algorithms through contextual prompting. Specifically, we show that in task-specific mode, a single-head softmax attention layer can approximate, to arbitrary precision, any function of the form $f(w^\top x - y)$, a class that covers the update rules of many machine learning algorithms, including gradient descent for linear regression. Furthermore, in prompt-programmable mode, we show that a single fixed-weight, two-layer softmax attention module can emulate any algorithm in the task-specific class through prompting alone. The core idea is to construct prompts that encode the algorithm's parameters in the token representations, so that softmax attention carries out the intended computation.
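To convey the flavor of the first claim, consider the following sketch (an illustrative construction of our own; the anchor tokens, grid points $z_1,\dots,z_m$, and inverse temperature $\beta$ are notation introduced here, not necessarily those of the formal argument). Place $m$ anchor tokens in the prompt whose keys encode the pairs $(z_j,\,-z_j^2/2)$ and whose values encode $f(z_j)$, and let the fixed query projection map the input token $(x, y)$ to $\beta\,(w^\top x - y,\; 1)$. Writing $z = w^\top x - y$, the head output is
\[
\sum_{j=1}^{m} \frac{\exp\!\big(\beta (z z_j - z_j^2/2)\big)}{\sum_{k=1}^{m} \exp\!\big(\beta (z z_k - z_k^2/2)\big)}\, f(z_j)
\;=\;
\sum_{j=1}^{m} \frac{\exp\!\big(-\tfrac{\beta}{2}(z - z_j)^2\big)}{\sum_{k=1}^{m} \exp\!\big(-\tfrac{\beta}{2}(z - z_k)^2\big)}\, f(z_j),
\]
since the factor $\exp(\beta z^2/2)$ is independent of $j$ and cancels in the softmax. For continuous $f$, this output converges to $f(z) = f(w^\top x - y)$ as the grid is refined and $\beta \to \infty$; in this sketch, softmax attention acts as a soft nearest-neighbor lookup over the prompt tokens.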