Here we compare the capabilities of context-sensitive two-point neurons with Transformer-based machine learning algorithms (Y. Tang & D. Ha, NeurIPS, 2021) when used in permutation-invariant neural networks for reinforcement learning (RL). We show that a network driven by context-sensitive two-point neurons, termed Cooperator, learns far more quickly than a Transformer with the same architecture and number of parameters. For example, see the CartPole and PyAnt video demos below and observe the difference.
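To make the "permutation-invariant" property concrete, here is a minimal NumPy sketch (not the actual Cooperator or Tang & Ha implementation; all weights and sizes are hypothetical): each observation component is encoded with the same shared weights and the results are mean-pooled, so shuffling the input ordering leaves the output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights applied to every observation component independently
# (hypothetical sizes, for illustration only).
W_in = rng.normal(size=(1, 16))   # per-component encoder
W_out = rng.normal(size=(16, 4))  # maps pooled features to action logits

def permutation_invariant_policy(obs):
    """Encode each scalar observation with the same weights, then
    mean-pool, so the output does not depend on input ordering."""
    feats = np.tanh(obs[:, None] @ W_in)   # (n_components, 16)
    pooled = feats.mean(axis=0)            # order-independent aggregation
    return pooled @ W_out                  # (4,) action logits

obs = rng.normal(size=8)
shuffled = rng.permutation(obs)
# Shuffling the observation components does not change the output.
assert np.allclose(permutation_invariant_policy(obs),
                   permutation_invariant_policy(shuffled))
```

Both the attention-based sensory neurons of Tang & Ha and the two-point-neuron variant compared here share this order-independent aggregation; they differ in how each component's contribution is computed.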