Moving Without Moving: Building a Brain-Controlled Wheelchair at Queen's
The fundamental premise of a brain-computer interface is deceptively simple: extract meaningful signals from the electrical noise of the brain, decode intent, and translate it into machine commands. The execution, however, requires navigating the complex topology of neural oscillations, signal processing pipelines, and real-time classification algorithms.
Our BCI-controlled wheelchair at Queen’s, developed through QBIT, used the Muse headband—a consumer-grade EEG device with four dry electrodes positioned at TP9, AF7, AF8, and TP10 according to the 10-20 system. While medical-grade EEG systems can have 64+ channels, the Muse’s limited electrode array was sufficient for detecting the specific neural signatures we needed: modulations in alpha and beta band oscillations.
The Logic of Our Control Signals
EEG measures voltage fluctuations at the scalp resulting from synchronized postsynaptic potentials in pyramidal neurons. These signals oscillate at different frequencies, each associated with distinct cognitive states:
Alpha waves (8-13 Hz) dominate during wakeful relaxation with eyes closed. They’re strongest over occipital and parietal regions and typically decrease with attention and cognitive load—a phenomenon called alpha desynchronization or event-related desynchronization (ERD). We leveraged this inverse relationship: decreased alpha power indicated increased attention or active cognitive engagement.
Beta waves (13-30 Hz) are associated with active thinking, focus, and motor planning. Beta synchronization increases during concentration and decreases during motor imagery or execution. The sensorimotor cortex shows particularly strong beta rhythms, which made beta band power a reliable indicator of intentional cognitive effort.
Our control scheme exploited these oscillatory patterns:
Forward movement: Sustained elevation in beta power (concentration state). We set a threshold around 60-70% of the user’s maximum beta during calibration. Maintaining this required active focus—mentally rehearsing a task, doing mental arithmetic, or visualizing movement.
Turning: Asymmetric alpha suppression between hemispheres. Thinking about left-sided movement suppresses right hemisphere alpha; thinking right suppresses left hemisphere alpha. We calculated a laterality index: (α_left - α_right)/(α_left + α_right). Values exceeding ±0.15 triggered directional commands.
Stop command: Elevated alpha power (relaxation state) or a jaw clench, which produces high-amplitude artifacts in the frontal electrodes that are trivial to detect.
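Putting the three rules together, the per-window decision logic can be sketched as follows. This is a simplified reconstruction rather than our exact code; the threshold values are illustrative, and mapping a positive laterality index (left alpha higher than right) to a left turn follows the contralateral-suppression convention described above.

```python
def decode_command(alpha_left, alpha_right, beta_power,
                   beta_thresh, alpha_rest_thresh, lat_thresh=0.15):
    """Map one window of band-power features to a wheelchair command."""
    # Stop: elevated alpha indicates the relaxation state.
    alpha_mean = (alpha_left + alpha_right) / 2.0
    if alpha_mean > alpha_rest_thresh:
        return "stop"

    # Turning: hemispheric asymmetry in alpha suppression.
    laterality = (alpha_left - alpha_right) / (alpha_left + alpha_right)
    if laterality > lat_thresh:
        return "left"    # right-hemisphere alpha suppressed -> left intent
    if laterality < -lat_thresh:
        return "right"

    # Forward: elevated beta; sustained-duration check happens upstream.
    if beta_power > beta_thresh:
        return "forward"
    return "idle"
```

In the real system this ran once per analysis window, with the consecutive-window requirement for forward movement layered on top.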
Signal Processing Pipeline
Raw EEG is essentially noise with signal buried inside. Our pipeline looked like this:
Preprocessing: Bandpass filter (1-50 Hz) to remove DC drift and high-frequency artifacts. Notch filter at 60 Hz for powerline interference. We experimented with Common Average Reference (CAR) to reduce volume conduction effects, though with only four electrodes, the benefit was marginal.
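A minimal version of this preprocessing stage, assuming SciPy and the Muse's 256 Hz sampling rate, might look like:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 256  # Muse sampling rate in Hz

def preprocess(eeg, fs=FS):
    """Bandpass 1-50 Hz plus a 60 Hz notch, applied zero-phase."""
    # 4th-order Butterworth bandpass removes DC drift and HF noise.
    b_bp, a_bp = butter(4, [1.0, 50.0], btype="bandpass", fs=fs)
    x = filtfilt(b_bp, a_bp, eeg, axis=-1)
    # Narrow notch at 60 Hz for powerline interference.
    b_n, a_n = iirnotch(60.0, Q=30.0, fs=fs)
    return filtfilt(b_n, a_n, x, axis=-1)
```

Using `filtfilt` (forward-backward filtering) avoids phase distortion, which matters when comparing band power across electrodes; the filter orders and notch Q shown here are reasonable defaults, not necessarily the ones we shipped.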
Feature Extraction: Fast Fourier Transform (FFT) on 1-second windows with 50% overlap to generate power spectral density estimates. We computed band power by integrating the PSD across alpha (8-13 Hz) and beta (13-30 Hz) ranges for each electrode.
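The band-power computation itself reduces to a few lines; here is a sketch using a plain windowed periodogram (the 50% overlap happens upstream when windows are cut, and the Hann taper is an assumption):

```python
import numpy as np

def band_power(window, fs=256, band=(8.0, 13.0)):
    """Integrate the one-sided power spectrum over a frequency band."""
    n = len(window)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Hann taper reduces spectral leakage before the FFT.
    psd = np.abs(np.fft.rfft(window * np.hanning(n))) ** 2 / n
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Rectangle-rule integration across the band's frequency bins.
    return psd[mask].sum() * (freqs[1] - freqs[0])
```

Alpha power is then `band_power(w, band=(8, 13))` and beta power `band_power(w, band=(13, 30))` for each electrode's window.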
Classification: Initially we used simple threshold-based classification—if beta power exceeded threshold X for Y consecutive windows, trigger forward movement. Later iterations implemented a linear discriminant analysis (LDA) classifier trained on labeled data from calibration sessions. The LDA approach improved accuracy from ~72% to ~85% by learning user-specific neural signatures.
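The LDA stage can be sketched with scikit-learn on synthetic calibration features; the three-feature layout (two alpha channels plus mean beta) and the class means below are invented for illustration:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic calibration windows: [alpha_TP9, alpha_TP10, beta_mean].
# "Relax" windows have high alpha; "focus" windows have high beta.
relax = rng.normal(loc=[8.0, 8.0, 2.0], scale=1.0, size=(100, 3))
focus = rng.normal(loc=[3.0, 3.0, 6.0], scale=1.0, size=(100, 3))
X = np.vstack([relax, focus])
y = np.array([0] * 100 + [1] * 100)  # 0 = relax/stop, 1 = focus/forward

clf = LinearDiscriminantAnalysis().fit(X, y)
accuracy = clf.score(X, y)
```

LDA is a good fit here: with only a handful of band-power features and small calibration sets, a linear decision boundary trains in milliseconds and resists overfitting better than more expressive models.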
Artifact Rejection: Eye blinks, muscle tension, and jaw movements create artifacts orders of magnitude larger than neural signals. We implemented automated artifact detection by flagging windows where signal amplitude exceeded 100 μV or where high-frequency power (30-50 Hz) suggested muscle contamination. This reduced false positives dramatically.
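Both criteria are cheap to compute per window. A sketch, using the 100 μV limit from above and an assumed 50% high-frequency power ratio as the EMG cutoff:

```python
import numpy as np

def is_artifact(window, fs=256, amp_limit=100.0, hf_ratio_limit=0.5):
    """Flag a window as contaminated by blink or muscle artifacts."""
    # Amplitude criterion: blinks/jaw clenches dwarf neural signals (μV).
    if np.max(np.abs(window)) > amp_limit:
        return True
    # EMG criterion: muscle activity loads the 30-50 Hz range.
    freqs = np.fft.rfftfreq(len(window), 1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    total = psd[(freqs >= 1) & (freqs <= 50)].sum()
    hf = psd[(freqs >= 30) & (freqs <= 50)].sum()
    return bool(total > 0 and hf / total > hf_ratio_limit)
```

Flagged windows were simply dropped from classification rather than repaired, which is the usual choice when windows arrive several times per second.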
Control Smoothing: Direct brain-to-wheelchair mapping creates jerky, unstable movement because neural signals fluctuate rapidly. We implemented a Kalman filter to smooth velocity commands and added hysteresis—requiring signals to cross different thresholds for state transitions depending on current state. This prevented oscillation between commands.
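The Kalman filter itself is textbook material; the hysteresis is the part worth spelling out, and it reduces to a two-threshold gate. A minimal sketch for the forward command (threshold values illustrative):

```python
class HysteresisGate:
    """Enter the active state above on_thresh; leave only below off_thresh."""

    def __init__(self, on_thresh, off_thresh):
        assert off_thresh < on_thresh
        self.on, self.off = on_thresh, off_thresh
        self.active = False

    def update(self, beta_power):
        if self.active and beta_power < self.off:
            self.active = False
        elif not self.active and beta_power > self.on:
            self.active = True
        return self.active
```

A signal hovering near a single threshold would toggle the command every few windows; with the gap between 0.7 to enter and 0.5 to exit, small fluctuations around either level leave the state unchanged.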
The Calibration Problem
BCIs are not plug-and-play. Every brain is different—anatomical variations, skull thickness, scalp conductivity, baseline neural activity. Each user required 10-15 minutes of calibration where we recorded their neural signals during:
Rest (eyes open)
Relaxation (eyes closed)
Concentration (mental arithmetic)
Motor imagery (imagining left/right hand movements)
We used this data to establish personalized thresholds and train classifiers. Some users had strong, clean signals and could achieve control within 20 minutes. Others had weak alpha rhythms or high muscle tension, requiring multiple sessions to develop stable control strategies.
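The threshold-setting step can be sketched like this; using the 95th percentile rather than the raw maximum (to resist artifact-inflated peaks) is an assumption, as is the 65% fraction:

```python
import numpy as np

def calibrate_thresholds(beta_concentrate, alpha_relax, beta_frac=0.65):
    """Derive per-user thresholds from calibration recordings.

    beta_concentrate: beta power per window during mental arithmetic.
    alpha_relax: alpha power per window during eyes-closed relaxation.
    """
    # Forward threshold: a fraction of the user's (robust) peak beta.
    beta_thresh = beta_frac * np.percentile(beta_concentrate, 95)
    # Stop threshold: typical alpha level while relaxed.
    alpha_thresh = np.median(alpha_relax)
    return beta_thresh, alpha_thresh
```

Re-running this at the start of each session also absorbed day-to-day drift in electrode contact and baseline activity.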
The real learning curve wasn’t teaching the computer to read brains—it was teaching brains to produce readable signals. Users had to develop their own mental strategies: some visualized squeezing a ball, others did countdown sequences, others focused on a visual stimulus. This neurofeedback aspect—real-time visualization of their brain states—was crucial. Users learned through operant conditioning which mental strategies produced desired neural patterns.
Technical Limitations and Challenges
The Muse, while accessible, has significant constraints:
Spatial resolution: Four electrodes provide minimal spatial information. We couldn’t distinguish between different motor imagery tasks beyond left/right asymmetry. Medical systems with high-density arrays can decode intended movements with much higher fidelity.
Signal quality: Dry electrodes have higher impedance than gel-based electrodes, reducing signal-to-noise ratio. Slight head movements caused baseline shifts. Hair, skin oils, and environmental humidity all affected signal quality.
Temporal precision: The Muse streams data at 256 Hz with Bluetooth latency of 40-100ms. For real-time control, this introduces noticeable lag between intent and action.
Mental load: Maintaining concentration states is cognitively exhausting. Users experienced mental fatigue after 15-20 minutes of active control. The system wasn’t “think and forget”—it required sustained cognitive effort.
The Broader Landscape: From Consumer EEG to Cortical Implants
Our project sits at the accessible end of a spectrum that extends to invasive, high-bandwidth neural interfaces.
Non-invasive BCIs like ours, or research systems built on motor imagery, P300 spellers, or steady-state visual evoked potentials, offer safety and ease of use but suffer from low information transfer rates—typically 10-25 bits per minute. That’s enough for basic control but nowhere near the bandwidth needed for complex tasks.
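Figures like "10-25 bits per minute" are usually computed with the Wolpaw information transfer rate, which depends on the number of selectable commands, the classification accuracy, and the selection speed:

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw ITR in bits/min for an N-class BCI with given accuracy."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance: no information transferred
    if p == 1.0:
        bits = math.log2(n)  # perfect accuracy: full log2(N) bits/selection
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min
```

For example, four commands at 85% accuracy with one decision every five seconds gives roughly 14 bits per minute, squarely in the range quoted above (the decision rate here is an assumed figure, not a measured one).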
Electrocorticography (ECoG) places electrode arrays on the surface of the brain beneath the skull. ECoG-based systems have allowed paralyzed patients to control computer cursors and have recently enabled real-time decoding of attempted speech. The signal quality is dramatically better—less attenuation from skull and scalp, higher spatial resolution, access to higher frequency bands (gamma oscillations). Information transfer rates reach 100+ bits per minute.
Intracortical microelectrode arrays penetrate into cortex, recording action potentials from individual neurons or small populations. Projects like BrainGate have used such implants (Utah arrays) to let paralyzed patients control robotic arms and computer cursors, and Neuralink’s threads follow the same principle. This is the highest resolution you can get—single-cell activity from motor cortex that directly encodes movement intentions.
Neuralink’s N1 implant has 1,024 electrodes distributed across 64 threads, each thinner than a human hair. Their recent human trials demonstrated a tetraplegic patient controlling a computer cursor and playing video games through thought alone. The information transfer rate exceeded 8 bits per second during typing tasks—roughly comparable to using a joystick, and far beyond what’s possible with EEG.
The technical leap from Muse to Neuralink isn’t just resolution—it’s bandwidth, stability, and longevity. Neuralink’s electrodes record neural activity that directly correlates with motor intent, rather than inferring it from aggregate oscillatory patterns. Their machine learning pipeline uses neural networks to decode intended movements from population activity, learning the mapping between neural states and desired actions through extensive training data.
Synchron’s Stentrode takes a different approach—a stent-mounted electrode array inserted into blood vessels adjacent to motor cortex. It’s less invasive than arrays requiring craniotomy but still provides better signal quality than scalp EEG. Their patients have achieved similar computer control capabilities to BrainGate users.
Paradromics and Blackrock Neurotech are developing high-density neural recording systems designed for long-term clinical use, focusing on speech restoration for ALS patients and sensorimotor prosthetics.
The Engineering Philosophy
What strikes me about the BCI field is the inverse relationship between accessibility and capability. Consumer EEG is safe, cheap, and easy—but limited. Intracortical arrays are powerful and precise—but require neurosurgery, carry infection risks, and face material degradation over time (immune responses, glial scarring, electrode drift).
Our wheelchair project existed in the accessible-but-limited quadrant. We proved the concept with consumer hardware, but a real clinical device would need medical-grade sensors, robust artifact handling, and fail-safe mechanisms. The jump from research prototype to clinical deployment is vast.
Yet there’s value in the accessible approach. When I was building this at Queen’s, I was a student with limited neuroscience background, working with off-the-shelf components and open-source software. The democratization of BCI technology—even crude, low-bandwidth BCI—enables experimentation that wouldn’t happen in purely clinical contexts.
Future Trajectories
The field is moving in several directions simultaneously:
Decoding complexity: Recent work decodes imagined speech directly from neural activity. Motor BCIs are giving way to cognitive BCIs that might read intended words, not just intended movements.
Bidirectionality: Current BCIs are output-only. Future systems will provide sensory feedback—stimulating sensory cortex to create artificial touch sensations in prosthetic limbs. Closed-loop BCIs where the brain receives information about its own states enable more sophisticated control.
Adaptation and learning: Modern BCIs use adaptive algorithms that continuously adjust to changing neural signals. Co-adaptive systems where both user and machine learn simultaneously achieve better performance than static decoders.
Wireless and miniaturization: Next-generation implants will be fully wireless with on-board processing, eliminating transcutaneous connectors that pose infection risks.
The bigger question, though, isn’t just technical capability—it’s what happens when brain-reading becomes ubiquitous. Consumer EEG is already being marketed for meditation, focus training, and sleep tracking. As the technology improves, as the costs drop, as the signals get cleaner, we’re approaching a world where neural data becomes another quantified-self metric.

