Progress updates for the BCI wheelchair navigation project — translating brain signals into real-world movement.
Beginning the data recording process: in the OpenBCI GUI, select the system control panel → Cyton (live) → serial from dongle, then rename the session using the pattern (firstname-type of data-date taken). Start with a 1-minute baseline with lights on, then eyes closed with lights off. After a 2 s delay, the participant opens their eyes to view a slide for 30 s, then closes them again for 10 s. This cycle is repeated for every color.
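The timing protocol above can be sketched as a simple schedule generator. The color list is a placeholder (the log does not name the stimuli), and the duration of the eyes-closed/lights-off phase is an assumption, since the log only gives the 1-minute figure for the lights-on baseline.

```python
# Sketch of the per-session stimulus timing described above (durations in seconds).
# COLORS is a placeholder list; the actual stimulus colors are not named in the log.
COLORS = ["red", "green", "blue"]

def build_schedule(colors):
    """Return a list of (phase, duration_s) events for one recording session."""
    events = [
        ("baseline_lights_on", 60),       # 1-minute baseline, lights on
        ("eyes_closed_lights_off", 60),   # duration assumed; not stated in the log
    ]
    for color in colors:
        events.append(("delay_eyes_closed", 2))     # 2 s delay before the slide
        events.append((f"view_{color}_slide", 30))  # eyes open, viewing the slide
        events.append(("eyes_closed_rest", 10))     # eyes closed again
    return events

schedule = build_schedule(COLORS)
total_s = sum(duration for _, duration in schedule)
```

Laying the protocol out as data like this makes it easy to audit session length before recording and to reuse the same schedule for every participant.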
Data formatting was done in Excel: load from CSV, select the file with the dropdown to load it into an existing table, remove the style, drag the table up one row, and enter the file number starting at zero. This process is repeated for each of the 9 files recorded per person.
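The manual Excel steps above could also be scripted; a minimal pandas sketch is below, with invented filenames and channel columns standing in for the real exports.

```python
import io
import pandas as pd

# Stand-ins for the CSVs exported per person (names and contents invented).
raw_files = {
    "zachary-green-0101.csv": "ch1,ch2\n0.5,0.6\n",
    "zachary-red-0101.csv": "ch1,ch2\n0.1,0.2\n0.3,0.4\n",
}

frames = []
for file_number, (name, text) in enumerate(sorted(raw_files.items())):
    df = pd.read_csv(io.StringIO(text))  # with real files: pd.read_csv(name)
    df["file_number"] = file_number      # file numbering starts at zero, as in the log
    frames.append(df)

# One table per person, with each row tagged by its source file.
combined = pd.concat(frames, ignore_index=True)
```

Tagging each row with its source file number preserves the same information as the Excel process while removing the drag-and-renumber steps.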
First data collection session in a while, so it took some time to re-acclimate. Obadiah's data is missing one section (8 files instead of 9). Nathan had an extra file, so one was excluded to maintain the 9-file standard.
Zachary, Nathan, and Shreyas recorded data this session. Two issues emerged: every file in a session has approximately 8,000 fewer rows than the previous one, regardless of recording length, suggesting data is being incorrectly added or removed. Additionally, Shreyas's combined data exceeded GitHub's file size limit (over 1 million rows). The headset and recording program both need to be investigated.
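A quick row-count check makes the shrinking-file problem easy to quantify. The helper below is a hypothetical diagnostic, not part of the team's pipeline; the toy file contents are invented for illustration.

```python
import csv
import io

def count_rows(csv_text):
    """Count data rows (excluding the header) in one CSV file's text."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader, None)  # skip the header row
    return sum(1 for _ in reader)

def row_deltas(files):
    """Row counts in recording order, plus the change from each file to the next."""
    counts = [count_rows(text) for text in files]
    return counts, [b - a for a, b in zip(counts, counts[1:])]

# Toy session: three files whose row counts shrink, mimicking the observed bug.
files = ["t,v\n" + "0,1\n" * n for n in (5, 3, 2)]
counts, deltas = row_deltas(files)
```

Run against a real session's CSVs, consistently negative deltas of similar magnitude would confirm that the loss is systematic rather than tied to recording length.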
Discovered that using markers eliminates the need to stop and start the stream so frequently. New procedure established:
The 0-marker intervals between numbered sections serve as the baseline, replacing the previous minute-long baseline recording. Data is now uploaded as plain-text CSV instead of Excel.
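Under the marker scheme, sections can be recovered directly from the marker column. The column name `marker` and the toy values below are assumptions, since the log does not show the actual file layout.

```python
import pandas as pd

# Toy recording: marker 0 = baseline interval, nonzero = numbered stimulus section.
df = pd.DataFrame({
    "marker": [0, 0, 1, 1, 1, 0, 2, 2, 0],
    "ch1":    [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
})

# Number contiguous runs of the same marker value as separate sections.
df["section"] = (df["marker"] != df["marker"].shift()).cumsum()

baseline = df[df["marker"] == 0]   # 0-marker rows serve as the baseline
stimuli = df[df["marker"] != 0]    # numbered sections are the stimulus data
n_sections = df["section"].nunique()
```

Because the baseline is interleaved with the stimuli in a single stream, one continuous recording replaces the repeated stop/start cycle the old procedure required.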
Thanksgiving break — team members are in different states. No data collected this week.
Data collected using the updated format established on 11/19/2025.
Obadiah developed an automated data collection system using an open-source library as an alternative to the OpenBCI software. Data was collected in 5-second intervals per color — producing a higher volume of samples with reduced recording length and eliminating human error. Initial testing on Obadiah to validate the model against color classification.
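The log does not name the open-source library, so rather than guess its API, here is a minimal sketch of the 5-second-per-color loop with a hypothetical `read_samples` function standing in for the real acquisition call.

```python
import time

def read_samples():
    """Hypothetical stand-in for the acquisition library's board-read call."""
    return [0.0]  # one fake sample per poll

def collect_by_color(colors, interval_s, poll_s=0.001):
    """Record for interval_s seconds per color, labeling each sample as it arrives."""
    labeled = []
    for color in colors:
        end = time.monotonic() + interval_s
        while time.monotonic() < end:
            for sample in read_samples():
                labeled.append((color, sample))  # label removes manual bookkeeping
            time.sleep(poll_s)
    return labeled

# Short demo interval here; the team used 5-second intervals per color.
data = collect_by_color(["red", "green"], interval_s=0.02)
```

Automating the loop this way gives each sample its label at capture time, which is where the elimination of human error comes from.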
After the break and into the new year, the team identified a core limitation: recording EEG data based solely on color viewing doesn't yield enough signal diversity, as only one electrode covers the visual cortex region activated by color. Two new directions are being explored:
We are still collecting as much data as possible on a regular schedule, and we continue to test imagined colors in the hope of producing accurate and stable results.
Imagined colors did not produce the results we wanted, so we are pivoting to imagined muscle tensing as the main focus of signal collection. We are currently refactoring the app we use to record data so it supports this new collection method. Hopefully, this will produce more accurate and stable results.