
Democracy of local processors

Transforming the cellular foundations of deep nets

Overview


Although dendrites were first observed more than a century ago (C. Golgi, 1885; S. Ramón y Cajal, 1893), their functional role in the brain remained mysterious and was therefore traditionally disregarded in the 20th-century conception of integrate-and-fire ‘point’ neurons, in which dendrites contribute at most a ‘dendritic democracy’ (M. Häusser, Current Biology, 2001).

Recent neurobiological breakthroughs (Larkum, M. E., Nature, 1998; Larkum, M. E., Science, 2020) have shown that a single two-point neuron (TPN) can solve the exclusive-or (XOR) problem, which is solvable only by multiple layers of conventional point neurons. Nevertheless, how TPNs perform this feat in large-scale, complex neural nets has, until now, remained enigmatic.
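To see why a second, dendritic point of integration makes XOR solvable by a single neuron, consider this minimal Python sketch. It assumes a toy dendritic activation that is maximal at threshold and decreases above it, a simplified stand-in for the dendritic calcium action potentials reported in the Science 2020 study; the exact functional form below is our illustrative assumption:

    import numpy as np

    def dcaap(drive, threshold=1.0, width=1.0):
        # Toy dendritic activation: the response peaks at threshold and
        # decreases for stronger drive, unlike a monotonic ReLU.
        # The exact functional form is an assumption for illustration.
        return np.maximum(0.0, 1.0 - np.abs(drive - threshold) / width)

    # XOR with a single unit: each active input contributes a drive of 1.0,
    # so the dendritic response is high only when exactly one input is on.
    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(a, b, dcaap(a + b) > 0.5)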


Going beyond dendritic democracy, our work addresses this long-standing issue by introducing a ‘democracy of local processors (DoLP)’. In contrast to the traditional assumption that feedforward information, i.e., the receptive field (R) or outside world, is the driving force behind neural output, DoLP empowers local processors to overrule the typical dominance of R and grants more authority to the contextual information coming from neighbouring neurons (the inside world); see the context-sensitive TPN figure on the right. These context-sensitive TPNs amplify the transmission of information when the context shows it to be relevant and suppress it when the context shows it to be irrelevant.

DoLP is cooperative (W. A. Phillips, The Cooperative Neuron, OUP, 2023) in that it seeks to maximize agreement between active neurons, thus reducing the transmission of conflicting information. This mechanism has been shown to be far more effective and efficient than current forms of deep nets, including Transformers, the backbone of ChatGPT (Adeel et al., 2023, 2022a, 2022b, 2020). It therefore offers a step change in transforming the cellular foundations of deep networks and neuromorphic computing paradigms; see the spiking neural net, robot, and hearing-aid demos below.

Check out the latest multi-scale perspective from the Human Brain Project (HBP), in which this work stands out as a notable highlight: HBP, 2023.

Architecture

Fig A: Context-sensitive neocortical neuron whose apical dendrites are in layer 1 (L1), with its cell body and basal dendrites in deeper layers. Input to the apical dendrites in L1 amplifies the transmission of information extracted from the feedforward input when it is coherent and relevant in that context.


Fig B: Functional depiction of a local processor with two points of integration, whose contextual integration zone receives proximal input from neighbouring processors, distal input from more distant parts of the network, and universal input from memory. Networks of such local processors can process complex, large-scale real-world data effectively and efficiently because they amplify the transmission of information that is needed in the current context while suppressing the transmission of information that is not.

Fig C: Simplified depiction of the inputs and outputs of two context-sensitive cortical pyramidal cells that facilitate the segregation and recombination of multiple input streams. There is much neurobiological and psychophysical evidence that this context-sensitivity regulates information flow within the thalamocortical system.


Fig D: Two-processor circuit with a detailed flow of information. Individual context-sensitive cooperative processors cooperate moment by moment via local and universal context (P, D, and U) to conditionally segregate coherent from conflicting feedforward (FF) signals and then recombine only the coherent signals to extract synergistic signals. For an auditory processor, R represents the sensory signal (e.g., noisy audio), P represents the noisy audio coming from a neighbouring cell of the same network or the prior output of the same cell, D represents signals coming from other parts of the current external input (e.g., visual stimuli), and U represents the brief memory broadcast to other brain regions. U could be extended to include general information about the target domain acquired from prior knowledge, emotions, and semantic knowledge. The asynchronous modulatory transfer function separates coherent from conflicting signals with the conditional probability of Y: Pr(Y = 1 | R = r, C = c) = p(T(r, c)), where p is a ReLU and T(r, c) is a continuous function on ℝ².
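A minimal Python sketch of this modulatory transfer follows; the specific form of T(r, c) below is an illustrative assumption rather than the published definition:

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def two_point_transfer(r, c):
        # r: net receptive-field (feedforward) drive; c: integrated context (P, D, U).
        # The gain exceeds 1 when r and c agree in sign (amplification)
        # and falls below 1 when they conflict (suppression).
        gain = 2.0 * sigmoid(r * c)       # illustrative choice of T(r, c)
        return relu(r * gain)             # p is a ReLU, as in p(T(r, c))

    print(two_point_transfer(1.0,  2.0))  # coherent context: amplified output
    print(two_point_transfer(1.0, -2.0))  # conflicting context: suppressed output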

Spiking Simulations

Going beyond standard backpropagation and building on burst-dependent synaptic plasticity (BDSP) (A. Payeur, Nature Neuroscience, 2021), here we have integrated our context-sensitive two-point neuron (CS-TPN) model into biologically plausible two-point neuron (TPN)-driven BDSP. In our model, the CS-TPN is divided into two integration zones: the somatic integration zone (SIZ) and the apical integration zone (AIZ). At the AIZ, different contextual inputs, including but not limited to universal context (Cu), distal context (Cd), proximal context (Cp), and credit assignment (Ce), evolve as contextual voltages governed by independent differential equations. The integrated context (C), computed from the evolved contextual voltages, then acts on the SIZ by modulating the receptive current: amplification if C is high, suppression if C is low.

In this scheme, information is encoded in the neurons' firing (single spike = blue, burst of spikes = red). The CS-TPN thus differentiates between irrelevant, relevant, and very relevant information (no spike, single spike, and burst of spikes, respectively), which improves learning. The raster plots show that CS-TPNs remain largely silent when information is less relevant and vocal (bursting) otherwise, and that TPNs fire more often than CS-TPNs.
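A minimal sketch of these two-zone dynamics is given below; the time constants, the tanh integration of C, and the multiplicative modulation of the receptive current are illustrative assumptions, not the published equations:

    import numpy as np

    def simulate_cs_tpn(r_current, cu, cd, cp, ce, dt=1.0, steps=200):
        # Euler simulation of one CS-TPN. At the AIZ, each contextual voltage
        # evolves by its own leaky differential equation toward its input;
        # the integrated context C then modulates the receptive current at
        # the SIZ (amplification if C is high, suppression if C is low).
        tau_ctx, tau_som, threshold = 20.0, 10.0, 1.0
        ctx_in = np.array([cu, cd, cp, ce])  # universal, distal, proximal, credit
        v_ctx = np.zeros(4)
        v_som, spikes = 0.0, []
        for t in range(steps):
            v_ctx += dt / tau_ctx * (ctx_in - v_ctx)  # AIZ: contextual voltages
            c = np.tanh(v_ctx.sum())                  # integrated context C
            i_mod = r_current * (1.0 + c)             # SIZ: modulated current
            v_som += dt / tau_som * (i_mod - v_som)
            if v_som >= threshold:
                spikes.append(t)
                v_som = 0.0                           # reset after a spike
        return spikes

    # Coherent context drives firing; conflicting context keeps the cell silent.
    print(len(simulate_cs_tpn(1.5, 0.5, 0.5, 0.5, 0.2)))      # many spikes
    print(len(simulate_cs_tpn(1.5, -0.5, -0.5, -0.5, -0.2)))  # no spikes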

Simulation (spiking XOR gate): (left) TPN-driven BDSP (Payeur, A., and Naud, R., et al., Nature Neuroscience, 2021); (middle) the proposed CS-TPN-driven BDSP; (right) spiking behaviour (blue: TPNs; red: CS-TPNs).

Next-Generation Multisensory Robots

Here we compare context-sensitive two-point neurons, used in permutation-invariant neural networks for reinforcement learning (RL), with Transformer-based machine learning algorithms (Y. Tang & D. Ha, NeurIPS, 2021), the same family that underpins ChatGPT. We show that a network driven by context-sensitive two-point neurons, termed Cooperator, learns far more quickly than a Transformer with the same architecture and number of parameters. For example, see the CartPole and PyAnt video demos below and observe the difference.
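As a sketch of what such a unit looks like inside a network layer (the weight shapes and the gating rule below are hypothetical illustrations, not the published Cooperator architecture), each unit integrates feedforward drive at one point and context from neighbouring units at the other, with the context gating the drive rather than adding to it:

    import numpy as np

    class TwoPointLayer:
        # Rate-based layer of context-sensitive two-point units: each unit
        # receives feedforward drive (R) at one point of integration and a
        # proximal context signal (P) from neighbouring units at the other.
        def __init__(self, n_in, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.w_ff = rng.normal(0.0, 0.1, (n_in, n_out))    # receptive-field weights
            self.w_ctx = rng.normal(0.0, 0.1, (n_out, n_out))  # lateral context weights

        def forward(self, x, prev_out):
            r = x @ self.w_ff                   # feedforward drive
            c = np.tanh(prev_out @ self.w_ctx)  # context from neighbouring units
            return np.maximum(0.0, r * (1.0 + c))  # context gates the drive

    # Usage: stack such layers over observations in a permutation-invariant policy.
    layer = TwoPointLayer(n_in=8, n_out=16)
    out = np.zeros(16)
    for obs in np.random.default_rng(1).normal(size=(5, 8)):  # dummy observations
        out = layer.forward(obs, out)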


Google Brain (Y. Tang, NeurIPS, 2021): Point Neurons-driven CartPole fails to learn in 10K iterations

1k iterations

5k iterations

10k iterations

Two-Point Neurons-driven CartPole learns within 5K iterations

1k iterations

5k iterations

10k iterations

Google Brain (Y. Tang, NeurIPS, 2021):

Point Neurons-driven PyAnt

(1K iterations)

Two-Point Neurons-driven PyAnt

(1K iterations)

Next-Generation Multisensory Hearing Aids

Here we demonstrate that context-sensitive two-point neurons enable extremely energy-efficient multisensory speech processing. In this audio-visual hearing-aid use case, the neurons use visual and environmental information to clean speech in a noisy environment. The simulation below shows that a 50-layer deep neural net activates, at any time during training, 1250 times fewer context-sensitive two-point neurons than point neurons. This opens new cross-disciplinary avenues for future on-chip DNN training implementations and posits a radical shift in current neuromorphic computing paradigms.

Pipeline: noisy STFTs and lip movements from the listener's environment are fed to the model, which outputs masks that are applied to the noisy STFT; the enhanced speech is returned to the listener (implemented on a Xilinx RFSoC kit).
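To make the pipeline and the sparse-activity claim concrete, here is a toy Python sketch; the weights are random and the gating rule is our hypothetical illustration, and in the trained network the suppression is far stronger, giving the 1250-fold figure above:

    import numpy as np

    rng = np.random.default_rng(0)

    def two_point_layer(stft_frame, av_context, w_ff, w_ctx):
        # Audio-visual context gates which units transmit: units whose
        # context is negative are suppressed to exactly zero, so only a
        # fraction of the layer is active at any moment.
        r = stft_frame @ w_ff
        c = np.tanh(av_context @ w_ctx)
        return np.maximum(0.0, r) * (c > 0)

    # Toy comparison of momentary neural activity with and without context gating
    stft = rng.normal(size=257)                      # one noisy STFT frame
    ctx = rng.normal(size=64)                        # lip-movement / environment features
    w_ff = rng.normal(size=(257, 512))
    w_ctx = rng.normal(size=(64, 512))

    point = np.maximum(0.0, stft @ w_ff)             # conventional point-neuron layer
    gated = two_point_layer(stft, ctx, w_ff, w_ctx)  # context-sensitive two-point layer
    print((point > 0).mean(), (gated > 0).mean())    # gating roughly halves activity here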

Overall neural activity in the network


Neural activity during training


Large Language Models

Coming Soon

Contact Us

ahsan.adeel@deepci.org