Next-Generation Multisensory Robots

Here we compare the capabilities of context-sensitive two-point neurons, used in permutation-invariant neural networks for reinforcement learning (RL), against a Transformer-based machine-learning baseline (Y. Tang & D. Ha, NeurIPS 2021). We show that the context-sensitive two-point-neuron-driven network, termed Cooperator, learns far more quickly than a Transformer with the same architecture and number of parameters. For example, see the CartPole and PyAnt video demos below and observe the difference.
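To make the setting concrete, the sketch below shows the basic idea behind a permutation-invariant observation encoder: each sensory input is processed by the same shared unit, and the per-sensor embeddings are combined with a symmetric pooling operation, so shuffling the inputs leaves the output unchanged. This is a minimal NumPy illustration of the general principle only; the dimensions, weights, and mean pooling are illustrative assumptions, not the attention mechanism of Tang & Ha or the two-point-neuron dynamics of Cooperator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions for this sketch, not from either paper).
OBS_DIM = 8   # number of scalar sensory inputs (e.g. CartPole-like state features)
HIDDEN = 16   # per-sensor embedding size
N_ACT = 2     # e.g. two discrete actions

# A single shared "sensory unit": every input scalar is embedded by the
# same weights, so the set of sensors carries no fixed ordering.
W_embed = rng.normal(0, 0.1, size=(1, HIDDEN))
b_embed = np.zeros(HIDDEN)
W_out = rng.normal(0, 0.1, size=(HIDDEN, N_ACT))

def encode(obs):
    """Permutation-invariant encoding: embed each scalar with shared
    weights, then mean-pool across sensors before the policy head."""
    per_sensor = np.tanh(obs[:, None] @ W_embed + b_embed)  # (OBS_DIM, HIDDEN)
    pooled = per_sensor.mean(axis=0)                        # order-independent
    return pooled @ W_out                                   # (N_ACT,)

obs = rng.normal(size=OBS_DIM)
shuffled = rng.permutation(obs)
# Mean pooling makes the output identical under any permutation of the inputs.
assert np.allclose(encode(obs), encode(shuffled))
```

Because the pooling is symmetric, the agent's policy cannot depend on the order in which sensors are wired up, which is the property both compared systems exploit.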

Google Brain (Y. Tang & D. Ha, NeurIPS 2021): Point-neuron-driven CartPole fails to learn within 10K iterations

[Video snapshots at 1K, 5K, and 10K iterations]

Cooperator: Two-point-neuron-driven CartPole learns within 5K iterations

[Video snapshots at 1K, 5K, and 10K iterations]

Google Brain (Y. Tang & D. Ha, NeurIPS 2021): Point-neuron-driven PyAnt (1K iterations)

Cooperator: Two-point-neuron-driven PyAnt (1K iterations)