In this paper, we propose the APTx Neuron, a novel unified neural computational unit that integrates nonlinear activation and linear transformation into a single trainable expression. The APTx Neuron is derived from the APTx activation function, eliminating the need for separate activation layers and thereby improving computational efficiency and architectural simplicity. The proposed neuron has the form $y = \sum_{i=1}^{n} ((\alpha_i + \tanh(\beta_i x_i)) \cdot \gamma_i x_i) + \delta$, where all parameters $\alpha_i$, $\beta_i$, $\gamma_i$, and $\delta$ are trainable. We validate an APTx Neuron-based architecture on the MNIST dataset, achieving a test accuracy of up to $96.69\%$ within 11 epochs using approximately 332K trainable parameters. These results highlight the superior expressive power and computational efficiency of APTx Neurons compared to conventional neurons, and suggest a new paradigm for integrated neuron design and for architectures built on top of them.
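The forward pass of the neuron defined above can be sketched directly from the formula. The following is a minimal, illustrative implementation; the function name `aptx_neuron` and the choice of plain Python scalars (rather than a tensor framework) are our own for clarity and are not taken from the paper.

```python
import math

def aptx_neuron(x, alpha, beta, gamma, delta):
    """Forward pass of a single APTx Neuron:
        y = sum_i (alpha_i + tanh(beta_i * x_i)) * gamma_i * x_i + delta
    where alpha, beta, gamma are per-input trainable parameters
    and delta is a trainable bias (all scalars here for illustration)."""
    return sum((a + math.tanh(b * xi)) * g * xi
               for xi, a, b, g in zip(x, alpha, beta, gamma)) + delta

# Evaluate the neuron on a toy 2-dimensional input with
# hypothetical parameter values (not from the paper).
y = aptx_neuron([1.0, -2.0], [1.0, 1.0], [1.0, 1.0], [0.5, 0.5], 0.0)
```

In a practical implementation, $\alpha_i$, $\beta_i$, $\gamma_i$, and $\delta$ would be registered as trainable tensors in an autodiff framework so they are updated by backpropagation alongside the rest of the network.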