EPSRC-funded COG-MHEAR Programme Grant

£4 million programme grant awarded to develop, by 2050, a transformative, privacy-preserving multimodal hearing aid that will seamlessly mimic the unique human cognitive ability to focus hearing on a single talker, effectively ignoring background distractor sounds regardless of their number and nature. Dr Ahsan Adeel is co-leading this prestigious EPSRC Transformative Healthcare Technologies programme grant alongside Prof. Hussain (Programme Director).

EPSRC-funded CogAvHearing

Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices. £0.51 million grant awarded (Prof. Amir Hussain, Lead Principal Investigator) by the UK Government’s Engineering and Physical Sciences Research Council (EPSRC) under the “Disruptive Hearing-Aid Technologies” call. The project is developing the world’s first multi-modal (MM) hearing aid (one that can see), in collaboration with the CMI lab at Sheffield University, the UK Medical Research Council (MRC), and Sonova Switzerland (a leading hearing-aid manufacturer). The listening device extracts speech from noise by using a camera to see what the speaker is saying, filtering out the competing sound. This ability is well beyond that of current audio-only hearing-aid technology and has the potential to improve the quality of life of the millions of people suffering from hearing loss.
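
As an illustration of the underlying idea, the minimal sketch below fuses lip-region features from the camera with the noisy audio spectrogram to predict a soft time-frequency mask that suppresses the competing sound. The `AVMaskNet` name, layer sizes, and LSTM fusion are assumptions for illustration, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class AVMaskNet(nn.Module):
    """Illustrative audio-visual mask estimator (not the CogAvHearing model).

    Fuses noisy-audio spectrogram frames with synchronous lip-region
    features and predicts a per-frequency soft mask for enhancement.
    """
    def __init__(self, n_freq=257, n_visual=64, hidden=256):
        super().__init__()
        self.fuse = nn.LSTM(n_freq + n_visual, hidden, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, noisy_spec, lip_feats):
        # noisy_spec: (batch, frames, n_freq) magnitude spectrogram
        # lip_feats:  (batch, frames, n_visual) visual embedding per frame
        x = torch.cat([noisy_spec, lip_feats], dim=-1)
        h, _ = self.fuse(x)
        m = self.mask(h)           # soft mask in [0, 1]
        return m * noisy_spec      # enhanced magnitude estimate

# Toy usage: one utterance of 100 frames.
model = AVMaskNet()
enhanced = model(torch.rand(1, 100, 257), torch.rand(1, 100, 64))
print(enhanced.shape)  # torch.Size([1, 100, 257])
```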

Computational modelling of biological audio-visual processing in Alzheimer's and Parkinson's diseases using conscious multisensory integration

Sensory impairments have an enormous impact on our lives and are closely linked to cognitive functioning. Neurodegenerative processes in Alzheimer's disease (AD) and Parkinson's disease (PD) affect the structure and functioning of neurons, resulting in altered neuronal activity. For example, patients with AD suffer from sensory impairment and lose the ability to channel awareness, yet the cellular and neuronal-circuit mechanisms underlying this disruption remain elusive. It is therefore important to understand how multisensory integration changes in AD/PD, and why patients fail to use sensory cues to guide their actions. This project aims to extend existing preliminary conscious multisensory integration (CMI) research to understand how the roles of audio and visual cues with respect to the outside world change in patients with neurodegenerative diseases (e.g. AD/PD).
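
One standard computational baseline against which such changes can be framed is maximum-likelihood cue combination, where audio and visual estimates are weighted by their reliabilities (inverse variances); altered sensory noise in AD/PD would then shift these weights. A minimal sketch, with purely illustrative variances:

```python
import numpy as np  # imported for consistency with later numeric sketches

def fuse_cues(audio_est, audio_var, visual_est, visual_var):
    """Reliability-weighted (maximum-likelihood) audio-visual fusion."""
    w_audio = (1 / audio_var) / (1 / audio_var + 1 / visual_var)
    fused = w_audio * audio_est + (1 - w_audio) * visual_est
    fused_var = 1 / (1 / audio_var + 1 / visual_var)
    return fused, fused_var

# Healthy listener: both cues equally reliable -> equal weighting.
print(fuse_cues(audio_est=2.0, audio_var=1.0, visual_est=3.0, visual_var=1.0))
# Simulated sensory impairment: noisier audio shifts weight towards vision.
print(fuse_cues(audio_est=2.0, audio_var=4.0, visual_est=3.0, visual_var=1.0))
```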

More natural human-like computing with enhanced situational awareness

In this research, we aim to further develop our understanding of cognition and its emergence over development and evolution in order to realize human-like computing. Our ongoing work involves the development of evolved neural models, inspired by human cognition to serve broader goals, and further informed by biological and psychological models of competence. These novel neural networks will be used to build accurate driver behavioral models for autonomous vehicles (e.g. driverless cars), enabling precise maneuvering decisions in different situations (e.g. blind spots, reversing).
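
As a purely illustrative sketch of what a driver behavioral model of this kind might look like (the feature set, layer sizes, and manoeuvre labels are assumptions, not our actual model), a small network maps situational features to a maneuvering decision:

```python
import torch
import torch.nn as nn

# Hypothetical situational features: gap to lead car (m), blind-spot-left
# occupied, blind-spot-right occupied, reversing flag, speed (m/s), indicator.
manoeuvres = ["keep_lane", "change_left", "change_right", "brake"]

policy = nn.Sequential(
    nn.Linear(6, 32), nn.ReLU(),
    nn.Linear(32, len(manoeuvres)),
)

situation = torch.tensor([[12.5, 0.0, 1.0, 0.0, 27.0, 0.0]])  # toy input
decision = manoeuvres[policy(situation).argmax(dim=-1).item()]
print(decision)
```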

Spiking conscious multisensory integration driven low-power neuromorphic chips

This research work aims to develop energy-efficient (low-power) neuromorphic chips and IoT sensors by exploiting the controlled-firing property of the conscious multisensory integration (CMI)/contextually-adaptive neuron (CAN). The CAN inherently leverages the complementary strengths of incoming multisensory signals with respect to the outside environment and anticipated behaviour.
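
A minimal sketch of the controlled-firing idea (the gating function, constants, and threshold are illustrative assumptions, not the actual CAN formulation): the feedforward receptive-field drive is amplified when the integrated multisensory context agrees with it and suppressed when it conflicts, so the unit fires sparsely and saves energy.

```python
import numpy as np

def can_unit(rf_drive, context, k=2.0):
    """Simplified context-gated unit (an illustrative stand-in for the CAN)."""
    gain = 1.0 / (1.0 + np.exp(-k * rf_drive * context))  # agreement gate
    return rf_drive * gain

def spike(drive, theta=0.5):
    """Emit a spike only when the context-modulated drive crosses threshold."""
    return drive > theta

print(spike(can_unit(rf_drive=1.0, context=1.0)))   # True: context agrees, fires
print(spike(can_unit(rf_drive=1.0, context=-1.0)))  # False: stays silent, saving energy
```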

Low power multisensory brain-computer interface for optimized and mindful decision making

This study focuses on how consciousness plays a role in multisensory integration (e.g. of vision and sound) and helps humans interact optimally with the world around them. In light of this understanding, we are developing a novel low-power multisensory brain-computer interface (BCI) for control applications such as operating a robotic arm, controlling household appliances, and driving assistive devices.
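
A minimal sketch of what the decoding stage of such a BCI could look like (the feature dimensions, template matching, and command set are all illustrative assumptions, not our actual system): features from two modalities are concatenated and matched against per-command templates.

```python
import numpy as np

commands = ["grasp", "release", "move_left", "move_right"]
rng = np.random.default_rng(0)
templates = rng.standard_normal((len(commands), 16 + 4))  # toy templates

def decode(eeg_bandpower, gaze_feats):
    """Map fused multisensory features to the best-matching command."""
    x = np.concatenate([eeg_bandpower, gaze_feats])
    scores = templates @ x             # similarity to each command template
    return commands[int(np.argmax(scores))]

# Toy usage with 16 EEG band-power features and 4 gaze features.
print(decode(rng.standard_normal(16), rng.standard_normal(4)))
```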

Explainable artificial intelligence with advanced information decomposition models

Existing AI and deep learning systems exhibit impressive performance and carry out tasks that are normally performed by humans. Yet these end-to-end multimodal AI models operate at the network level and fail to justify their reasoning, offering limited generalization and real-time analytics; this restricts their application in areas where outcomes have an impact on humans. Humans, on the other hand, can extrapolate from a small number of examples, and are quick to learn and to generalize lessons learned in one situation to instances that occur in different contexts. In this work, we are using convolutional neural networks (CNNs) and advances in information decomposition to address these problems and develop explainable AI algorithms.
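
As a toy worked example of the kind of quantity information decomposition exposes (a generic illustration, not the project's algorithm): for Z = X XOR Y, neither input alone carries any information about the output, yet together they determine it completely, i.e. the information is purely synergistic.

```python
import numpy as np
from itertools import product

def H(p):
    """Shannon entropy in bits of a (flattened) probability array."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy system: Z = X XOR Y with fair, independent input bits X and Y.
joint = np.zeros((2, 2, 2))                    # axes: (X, Y, Z)
for x, y in product([0, 1], repeat=2):
    joint[x, y, x ^ y] = 0.25

pZ = joint.sum(axis=(0, 1))
I_XZ = H(joint.sum(axis=(1, 2))) + H(pZ) - H(joint.sum(axis=1))
I_YZ = H(joint.sum(axis=(0, 2))) + H(pZ) - H(joint.sum(axis=0))
I_XY_Z = H(joint.sum(axis=2)) + H(pZ) - H(joint)

print(I_XZ, I_YZ)  # 0.0 0.0: each input alone tells us nothing about Z
print(I_XY_Z)      # 1.0: together they determine Z fully (pure synergy)
```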

Statistical Analysis Driven Optimized Deep Learning System for Intrusion Detection

This project is developing an innovative, statistical-analysis-driven, optimized deep learning system for intrusion detection. The proposed intrusion detection system (IDS) extracts optimized, more strongly correlated features using big-data visualization and statistical analysis methods, followed by a deep autoencoder (AE) for potential threat detection.
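
A minimal sketch of the pipeline shape (the correlation-based filter, feature counts, and threshold are illustrative assumptions, not the proposed system's actual statistical methods): statistically selected features feed a deep autoencoder, and a high reconstruction error flags a potential threat.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.rand(500, 20)               # toy "benign" traffic features

# 1) Statistical step (illustrative): keep the most mutually correlated features.
corr = torch.corrcoef(normal.T).abs().mean(dim=0)
keep = corr.argsort(descending=True)[:12]  # top-12 features
X = normal[:, keep]

# 2) Deep autoencoder trained to reconstruct benign traffic.
ae = nn.Sequential(nn.Linear(12, 6), nn.ReLU(), nn.Linear(6, 12))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = ((ae(X) - X) ** 2).mean()
    loss.backward()
    opt.step()

# 3) Detection: unusually high reconstruction error => potential threat.
def anomaly_score(x):
    return ((ae(x[:, keep]) - x[:, keep]) ** 2).mean(dim=1)

with torch.no_grad():
    threshold = anomaly_score(normal).quantile(0.99)
    suspect = torch.rand(1, 20) * 5        # toy out-of-distribution sample
    print(bool(anomaly_score(suspect) > threshold))  # True: flagged
```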

Deep Learning Driven Optimized Centralized Random Backoff (CRB) for Collision Resolution in Wi-Fi Networks

Existing Wi-Fi devices follow the IEEE 802.11 standards to share the channel fairly. However, the throughput of existing Wi-Fi networks suffers from high packet loss, and the networks support only a limited number of nodes at low data rates. This project aims to develop a novel centralized, collision-free Wi-Fi backoff algorithm that achieves higher aggregate throughput (supporting more Wi-Fi connections) than existing state-of-the-art deterministic backoff mechanisms.
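
A minimal simulation of the contrast the project targets (the node count, window size, and earliest-slot-wins contention model are illustrative simplifications of 802.11 behaviour): under random backoff, two nodes can draw the same earliest slot and collide, whereas centrally assigned distinct slots are collision-free by construction.

```python
import random

N_NODES, WINDOW, ROUNDS = 8, 16, 10_000
random.seed(1)

def random_backoff_collides():
    """Standard 802.11-style contention: each node draws a random slot.

    The earliest slot wins the channel; the round is wasted if two or
    more nodes share that earliest slot.
    """
    slots = [random.randrange(WINDOW) for _ in range(N_NODES)]
    return slots.count(min(slots)) > 1

def centralized_backoff_collides():
    """Centralized assignment: the coordinator hands out distinct slots."""
    slots = random.sample(range(WINDOW), N_NODES)  # unique by construction
    return slots.count(min(slots)) > 1             # always False

print("random backoff collided rounds:     ",
      sum(random_backoff_collides() for _ in range(ROUNDS)))
print("centralized backoff collided rounds:",
      sum(centralized_backoff_collides() for _ in range(ROUNDS)))
```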