Audio‑Neural Interface for Enhancing Memory Encoding

This project develops a scalable audio‑neural interface that improves memory encoding by transforming incoming speech, in real time, into a prosodically and rhythmically optimized form. The real‑time auditory pipeline integrates neural feedback from ear‑EEG to personalize speech processing according to the listener's cognitive state.
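
To make the closed-loop idea concrete, the Python sketch below shows one way such a pipeline could be wired up. It is illustrative only: the ear‑EEG sampling rate, the theta/alpha "engagement" index, and the mapping from that index to speech‑rate and pause parameters are assumptions introduced here for exposition, not the project's actual design.

```python
"""Minimal closed-loop sketch (illustrative): estimate a listener-state index
from an ear-EEG window and map it to prosodic playback parameters."""

import numpy as np

FS_EEG = 250      # assumed ear-EEG sampling rate (Hz)
WINDOW_S = 2.0    # analysis window length (s)


def band_power(eeg: np.ndarray, fs: int, lo: float, hi: float) -> float:
    """Mean spectral power of a single-channel EEG window in [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[mask].mean())


def engagement_index(eeg: np.ndarray, fs: int = FS_EEG) -> float:
    """Hypothetical proxy for encoding-relevant state: theta / alpha power ratio."""
    theta = band_power(eeg, fs, 4.0, 8.0)
    alpha = band_power(eeg, fs, 8.0, 12.0)
    return theta / (alpha + 1e-9)


def prosody_params(index: float) -> dict:
    """Map the state index to playback parameters.

    Assumed policy: lower engagement -> slower speech and longer pauses.
    """
    slowdown = float(np.clip(1.0 + 0.3 * (1.0 - index), 1.0, 1.3))
    pause_ms = float(np.clip(200.0 * (1.5 - index), 0.0, 400.0))
    return {"rate_factor": 1.0 / slowdown, "inter_phrase_pause_ms": pause_ms}


if __name__ == "__main__":
    # A simulated 2-second single-channel window stands in for the ear-EEG stream.
    rng = np.random.default_rng(0)
    eeg_window = rng.standard_normal(int(FS_EEG * WINDOW_S))
    idx = engagement_index(eeg_window)
    print(f"state index: {idx:.2f} -> params: {prosody_params(idx)}")
```

In a deployed system the simulated window would be replaced by the device's streaming buffer, and the returned parameters would drive the speech renderer for the next phrase, closing the loop at roughly the analysis-window rate.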

Key Objectives

Unlike conventional hearing aids or passive listening tools, this system forms a closed loop between speech processing and brain activity, directly influencing memory encoding. We expect measurable improvements in retention compared with standard audio delivery, aligning with UK research priorities at the intersection of AI and neurotechnology.
