Audio‑Neural Interface for Enhancing Memory Encoding
This project develops a scalable audio‑neural interface aimed at improving memory encoding by dynamically transforming incoming speech into a prosodically enriched, rhythmically optimized form. A real‑time auditory pipeline integrates with neural feedback from ear‑EEG to personalize speech processing according to the listener's cognitive state.
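To make the closed loop concrete, the sketch below shows one control‑loop iteration in Python: estimate a cognitive‑state index from an ear‑EEG frame, map it to a modulation depth, and apply a slow prosodic envelope to the corresponding audio frame. The sample rates, the theta‑band power proxy for cognitive state, the depth‑mapping policy, and all function names are illustrative assumptions, not the project's specified design.

```python
import numpy as np

FS_EEG = 250      # assumed ear-EEG sample rate (Hz)
FS_AUDIO = 16_000 # assumed audio sample rate (Hz)

def theta_power(eeg_frame: np.ndarray, fs: int = FS_EEG) -> float:
    """Relative 4-8 Hz band power, a stand-in cognitive-state
    index (hypothetical choice of feature)."""
    spectrum = np.abs(np.fft.rfft(eeg_frame)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_frame), d=1.0 / fs)
    band = spectrum[(freqs >= 4) & (freqs < 8)].sum()
    total = spectrum[(freqs >= 1) & (freqs < 40)].sum()
    return float(band / total) if total > 0 else 0.0

def modulate_prosody(audio_frame: np.ndarray, depth: float,
                     rate_hz: float = 4.0, fs: int = FS_AUDIO) -> np.ndarray:
    """Apply a slow amplitude envelope at a syllable-like rate.
    A placeholder for the full prosody transform."""
    t = np.arange(len(audio_frame)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)
    return audio_frame * envelope

def closed_loop_step(eeg_frame: np.ndarray,
                     audio_frame: np.ndarray) -> np.ndarray:
    """One iteration of the EEG -> state -> prosody loop."""
    state = theta_power(eeg_frame)
    # Illustrative policy only: stronger rhythmic emphasis when
    # the engagement proxy is low.
    depth = float(np.clip(0.5 * (1.0 - state), 0.0, 0.5))
    return modulate_prosody(audio_frame, depth)
```

In a deployed system the band‑power feature would likely be replaced by a trained state decoder, and the fixed mapping by a learned policy; the loop structure, however, would remain the same.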
Key Objectives
- AI‑driven real‑time prosody transformation: Convert neutral speech into rhythmically structured forms to increase encoding efficiency.
- Personalized neuroadaptive processing: Use ear‑EEG to optimize prosodic modulation based on real‑time cognitive state and personal auditory history (a minimal adaptation sketch follows this list).
- Scalable, non‑invasive interface: Earplug‑based neurotechnology designed for broad deployment without clinical procedures.
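One way the personalization objective could be realized is simple online adaptation of per‑listener modulation parameters. The sketch below uses an epsilon‑greedy bandit over candidate modulation rates; the candidate grid, the reward signal (an encoding‑success proxy such as a later recall score), and the bandit formulation itself are all assumptions made for illustration.

```python
import numpy as np

class ProsodyPersonalizer:
    """Online per-listener tuning of the prosodic modulation rate.
    A minimal bandit-style sketch; the real adaptation scheme is
    left open by the project description."""

    def __init__(self, rates=(2.0, 3.0, 4.0, 5.0),
                 epsilon=0.1, seed=0):
        self.rates = np.asarray(rates)      # candidate rates (Hz)
        self.values = np.zeros(len(rates))  # running reward estimates
        self.counts = np.zeros(len(rates), dtype=int)
        self.epsilon = epsilon              # exploration probability
        self.rng = np.random.default_rng(seed)
        self._last = 0

    def choose_rate(self) -> float:
        """Epsilon-greedy pick of the next modulation rate."""
        if self.rng.random() < self.epsilon:
            self._last = int(self.rng.integers(len(self.rates)))
        else:
            self._last = int(np.argmax(self.values))
        return float(self.rates[self._last])

    def update(self, reward: float) -> None:
        """Fold in an encoding-success proxy (e.g., a recall score)
        for the last chosen rate, via an incremental mean."""
        i = self._last
        self.counts[i] += 1
        self.values[i] += (reward - self.values[i]) / self.counts[i]
```

The incremental‑mean update keeps per‑listener state tiny (two small arrays), which matters for an earplug‑scale device; richer schemes such as contextual bandits conditioned on the EEG state would be a natural extension.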
Unlike conventional hearing aids or passive tools, this system forms a closed loop between speech processing and brain activity, directly influencing memory encoding. We expect measurable improvements in retention relative to standard audio delivery. The work aligns with UK strategic priorities at the intersection of AI and neurotechnology.