🧠 Master's Thesis · Pompeu Fabra University

The Point of No Return in Action Cancellation

When you decide to move, there's a critical moment—about 200ms before the action—when it becomes impossible to cancel. This research explores how that "point of no return" shapes your feeling of control.

~200ms Point of No Return
90%+ Detection Accuracy
Real-time EEG-based BCI

The Big Idea

Core Question: When does your brain consider an action "yours"? Is it when you physically move, or earlier—when you can no longer stop?

🧠 Sense of Agency (SoA)

The feeling "I did that"—the subjective experience of controlling your actions. It's what makes voluntary movement feel different from a reflex.

⏱️ Point of No Return

Research shows there's a critical moment ~200ms before you act when canceling becomes impossible. Your brain has committed, even if your finger hasn't moved yet.

🔗 The Connection

Does crossing this "point of no return" shape how much control you feel afterward? That's what this research investigates.

Voluntary action components: Intention and Agency
Voluntary action consists of two subjective experiences: Intention (prospective, pre-action) and Agency (retrospective, post-action). This research focuses on how the timing of outcomes affects the sense of agency.

Hypothesis

Main Prediction

Once you pass the "point of no return," your brain treats the action as already initiated—even before you physically move. This means if an outcome happens within that ~200ms window, you'll still feel strong agency. But if the outcome comes before that point, your sense of control drops.

Timeline showing hypothesis about point of no return
Experimental hypothesis timeline: The outcome (sound) can be triggered at different times after intention detection. When it occurs after the "point of no return" but before physical action, we predict participants will still report strong sense of agency.

🎯 Testing the Idea

Use a Brain-Computer Interface to detect when someone intends to act, then trigger an outcome at different times. Measure how much control they feel in each case.

💡 Why It Matters

Understanding this helps us design better BCIs, prosthetics, and human-machine systems where timing affects whether users feel in control—or feel the system is controlling them.

Approach

Two-Stage Design: Stage 1 (completed): train the AI to detect intentions · Stage 2 (planned): test agency at different outcome timings
Participant wearing EEG cap and ready to press button
Experimental setup: Participant wearing EEG cap, fixating on screen, with hand resting on button box. The box contains both the button and speaker to create natural action-outcome pairing.

Stage 1: Train the System ✅

Record EEG while people press a button whenever they want. Use machine learning to detect the "pre-movement" brain pattern that predicts an action is coming.

EEG Recording Machine Learning 90%+ Accuracy

Stage 2: Test Agency (Planned)

Use the trained system in real-time. When it detects intention, trigger a sound at varying times before the actual button press. Ask: "How much did you feel you caused that sound?"

Real-time Detection Variable Timing Agency Ratings

Stage 1: Data Collection & Training

Timeline of preparatory stage experiment
Preparatory stage protocol: Participants perform self-paced button presses across multiple trials. Each press triggers a synchronous sound. EEG data is collected to train subject-specific classifiers.

How the Classifier Works

Feature extraction process from EEG data
Feature extraction: EEG data is segmented into 1000ms windows, baseline corrected, and downsampled into 100ms averages to create feature vectors.
Feature concatenation across channels
Multi-channel integration: Features from all EEG channels are concatenated into a single feature vector fed to the classifier.
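In code, the feature-extraction pipeline described above might look like the following sketch. The 100 Hz sampling rate, channel count, baseline strategy (subtracting each channel's window mean), and function name are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def extract_features(epoch, fs=100):
    """Turn one EEG epoch (channels x samples) into a feature vector.

    Each channel's 1000 ms window is baseline corrected (here, by
    subtracting the channel's mean over the window), averaged in
    consecutive 100 ms bins, and the per-channel bins are concatenated
    into a single vector.
    """
    n_channels, n_samples = epoch.shape
    bin_len = int(0.1 * fs)                      # samples per 100 ms bin
    # Baseline correction: remove each channel's mean level
    corrected = epoch - epoch.mean(axis=1, keepdims=True)
    # Downsample: average within consecutive 100 ms bins
    n_bins = n_samples // bin_len
    binned = (corrected[:, :n_bins * bin_len]
              .reshape(n_channels, n_bins, bin_len)
              .mean(axis=2))
    # Concatenate all channels into one feature vector
    return binned.ravel()

# Example: 8 channels, 1000 ms at 100 Hz -> 8 x 10 = 80 features
epoch = np.random.randn(8, 100)
features = extract_features(epoch)
print(features.shape)  # (80,)
```

With a 1000 ms window and 100 ms averages, each channel contributes 10 features, so the classifier's input size scales linearly with the number of channels.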

Stage 2: Real-Time Experiment (Planned)

Timeline of real-time experiment with variable outcome timing
Real-time experimental protocol: The trained classifier continuously monitors EEG activity. Upon detecting intention, the outcome (sound) is triggered at varying delays. After each trial, participants rate their sense of agency.
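A minimal sketch of the planned detect-and-trigger loop, assuming hypothetical `classify`, `get_window`, and `play_sound` callables and illustrative delay values (the actual experiment code may be structured differently):

```python
import random
import time

def run_trial(classify, get_window, play_sound, delays_ms=(0, 50, 100, 150)):
    """One trial of the planned real-time protocol (sketch).

    get_window returns the latest EEG feature window, classify maps it
    to 'idle' or 'pre-movement', and play_sound emits the outcome tone.
    The delay between detection and outcome is randomized per trial.
    """
    delay = random.choice(delays_ms)   # outcome timing varies per trial
    while True:
        window = get_window()
        if classify(window) == 'pre-movement':
            # Intention detected: trigger the outcome after the chosen delay
            if delay > 0:
                time.sleep(delay / 1000.0)
            play_sound()
            return delay
        time.sleep(0.02)               # re-evaluate every 20 ms

# Stub demo: pretend the third window shows the pre-movement pattern
state = {'n': 0}
def get_window():
    state['n'] += 1
    return state['n']
def classify(window):
    return 'pre-movement' if window >= 3 else 'idle'
def play_sound():
    print('beep')

run_trial(classify, get_window, play_sound, delays_ms=(100,))  # prints 'beep'
```

After each such trial, the participant's agency rating would be logged alongside the delay, which is the key independent variable of Stage 2.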

Technical Summary

  • Brain signals: EEG recorded over motor areas
  • AI method: Regularized Linear Discriminant Analysis (RLDA)
  • Speed: 20ms updates (fast enough to catch the ~200ms window)
  • Personalization: Each person gets their own trained classifier
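As a rough illustration of the RLDA step, scikit-learn's shrinkage-regularized LDA can play the same role. The synthetic data, dimensions, and shrinkage choice below are assumptions for the sketch; the thesis's own implementation and parameters may differ:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: feature vectors labelled
# 0 = 'Idle', 1 = 'Pre-movement'. Shapes are illustrative.
rng = np.random.default_rng(0)
X_idle = rng.normal(0.0, 1.0, size=(200, 80))
X_pre = rng.normal(0.5, 1.0, size=(200, 80))
X = np.vstack([X_idle, X_pre])
y = np.array([0] * 200 + [1] * 200)

# Shrinkage-regularized LDA: the 'lsqr' solver with Ledoit-Wolf
# shrinkage stabilizes the covariance estimate when the feature
# count rivals the number of training trials
clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
clf.fit(X, y)

# At run time the trained classifier is re-applied to the newest
# feature window, e.g. every 20 ms as new samples arrive
new_window = rng.normal(0.5, 1.0, size=(1, 80))
print(clf.predict(new_window))
```

Fitting a separate classifier per participant, as the thesis does, matters because pre-movement EEG patterns vary considerably across individuals.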

Results (Stage 1)

🎯 High Accuracy

Average: 90.7% correct detection

The system successfully learned to predict when someone was about to press a button, just from their brain activity—before they moved.

⚡ Fast Enough for Real-Time

Classifier updates every 20ms, making it feasible to detect intentions and trigger outcomes within the critical ~200ms window.

✅ Ready for Stage 2

Subject-specific classifiers were trained successfully for all 5 participants. System validated for real-time operation. Next step: test the agency hypothesis with variable outcome timing.

Classification accuracy across participants
Classification performance: All five participants achieved over 87% accuracy in distinguishing between 'Idle' and 'Pre-movement' states, with an average of 90.7% using leave-one-out cross-validation.
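Leave-one-out cross-validation, the evaluation scheme behind the reported accuracies, can be sketched with synthetic data (trial counts, dimensions, and effect size below are illustrative, not the thesis's):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Synthetic 'Idle' vs 'Pre-movement' trials, 20 features each
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (40, 20)),
               rng.normal(1.0, 1.0, (40, 20))])
y = np.array([0] * 40 + [1] * 40)

clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
# Leave-one-out: train on all trials but one, test on the held-out
# trial, repeat once per trial; the mean is the reported accuracy
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")
```

Leave-one-out is a natural choice here because per-participant trial counts in EEG experiments are small, and it makes maximal use of the available data.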

✅ Proof of Concept Successful

We can reliably detect intention before action using non-invasive EEG and machine learning. The system is ready to test how timing affects sense of agency.

Why This Matters

🤖 Better BCIs

Brain-computer interfaces that act too fast might make users feel out of control. Understanding agency timing helps design systems that feel natural.

🧠 Understanding Agency

Provides evidence that the "point of no return" isn't just about action cancellation—it may be the moment when your brain marks an action as "mine."

⚖️ Ethics & Responsibility

Who's responsible when a BCI acts on detected intentions? Understanding agency timing is crucial for legal and ethical frameworks.

Master's Thesis

Hamed Ghane · Supervised by Prof. Salvador Soto Faraco

MSc Brain and Cognition · Pompeu Fabra University · July 2023

📄 Read Complete Thesis