
Next-Generation Multisensory Hearing Aids

Here we demonstrate that context-sensitive two-point neurons enable extremely energy-efficient multisensory speech processing. In this audio-visual hearing-aid use case, the neurons use visual and environmental information to clean speech in a noisy environment. The simulation below shows that a 50-layer deep neural network requires 1,250 times fewer context-sensitive two-point neurons than point neurons at any time during training. This opens new cross-disciplinary avenues for future on-chip DNN training implementations and suggests a radical shift in current neuromorphic computing paradigms.
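The idea of a two-point neuron can be sketched as follows. Unlike a point neuron, which sums all inputs at one site, a two-point neuron receives a feedforward drive (here, noisy audio) at one integration site and a contextual signal (here, visual/environmental information) at a second site; the context modulates, rather than adds to, the feedforward drive. The sketch below uses one modulatory transfer function from the coherent-infomax literature as an illustrative assumption; it is not necessarily the exact function used in this work.

```python
import numpy as np

def two_point_neuron(r, c):
    """Simplified two-point activation (a modulatory form after
    Kay & Phillips, used here as an illustrative assumption):
    context c amplifies the feedforward drive r when the two agree
    and attenuates it when they conflict. Note A(r, 0) = r, so the
    neuron falls back to pure feedforward behaviour without context.
    """
    r = np.asarray(r, dtype=float)
    c = np.asarray(c, dtype=float)
    return 0.5 * r * (1.0 + np.exp(2.0 * r * c))

# Feedforward drive from the noisy audio stream.
audio_drive = 1.0

# Context from lip movements: agreeing, absent, and conflicting.
print(two_point_neuron(audio_drive, 1.0))   # context agrees  -> amplified
print(two_point_neuron(audio_drive, 0.0))   # no context      -> unchanged (= r)
print(two_point_neuron(audio_drive, -1.0))  # context conflicts -> suppressed
```

Because neurons whose context disagrees with their feedforward input are suppressed, only a small, contextually coherent subset of the network is strongly active at any moment, which is the intuition behind the energy-efficiency claim above.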

[Demo figures: noisy STFT and lip-movement inputs; Xilinx RFSoC Kit hardware; overall neural activity in the network and neural activity over 10k training iterations]