Here we compare the capabilities of context-sensitive two-point neurons, used in permutation-invariant neural networks for reinforcement learning (RL), with Transformer-based machine learning algorithms (Y. Tang & D. Ha, NeurIPS, 2021). We show that a network driven by context-sensitive two-point neurons, termed Cooperator, learns far more quickly than a Transformer with the same architecture and number of parameters. For example, see the CartPole and PyAnt video demos below and observe the difference.
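To make the core idea concrete, the sketch below shows one possible form of a context-sensitive two-point unit: unlike a standard point neuron, which applies a nonlinearity to a single weighted sum, a two-point unit keeps two integration sites, a receptive (feedforward) site and a contextual (modulatory) site, and lets context scale the feedforward drive rather than add to it. This is a minimal illustrative sketch, not the exact activation used by Cooperator; the function `two_point_unit` and the particular gain formula are assumptions for illustration.

```python
import numpy as np

def two_point_unit(x_ff, x_ctx, w_ff, w_ctx):
    """Illustrative context-sensitive two-point unit (hypothetical form).

    x_ff / w_ff: feedforward input and weights (receptive site)
    x_ctx / w_ctx: contextual input and weights (modulatory site)
    """
    r = np.dot(w_ff, x_ff)    # receptive-field drive
    c = np.dot(w_ctx, x_ctx)  # contextual drive
    # Modulatory combination: context sets a gain in (0, 2), so context
    # that agrees with r amplifies it and conflicting context suppresses
    # it, but context alone (r = 0) cannot drive the output.
    gain = 2.0 / (1.0 + np.exp(-r * c))
    return np.tanh(r * gain)

rng = np.random.default_rng(0)
x_ff, x_ctx = rng.normal(size=4), rng.normal(size=4)
w_ff, w_ctx = rng.normal(size=4), rng.normal(size=4)
print(two_point_unit(x_ff, x_ctx, w_ff, w_ctx))
```

Note the contrast with a Transformer, where interaction between inputs is mediated by attention over all tokens; here modulation is local to each unit, which is one intuition for why such networks can adapt quickly.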