Train a machine learning model to read electrical signals from a non-invasive brain-computer interface (BCI), allowing a user to control a motorized wheelchair with their thoughts. This project would give patients with severe motor impairments and little or no limb movement independent control of their mobility.
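A minimal sketch of the decoding step, using synthetic data in place of real recordings: band-power features in the mu and beta rhythms (the bands classically modulated by motor imagery) feed a linear classifier that maps each EEG window to a wheelchair command. The sampling rate, channel count, and command set here are illustrative assumptions, not project decisions.

```python
# Sketch: classify synthetic "EEG" windows into wheelchair commands.
# All data is synthetic; a real system would stream from a BCI headset
# and need careful filtering, calibration, and safety validation.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 250                                   # assumed sampling rate in Hz
COMMANDS = ["stop", "forward", "left", "right"]

def band_power(channel, fs, lo, hi):
    """Average power of one channel in a frequency band via Welch's method."""
    freqs, psd = welch(channel, fs=fs, nperseg=fs)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def features(window, fs=FS):
    """Per-channel mu (8-12 Hz) and beta (13-30 Hz) band-power features."""
    return np.array([band_power(ch, fs, lo, hi)
                     for ch in window
                     for lo, hi in [(8, 12), (13, 30)]])

# Synthetic dataset: 200 two-second, 8-channel windows with a weak
# class-dependent oscillation buried in noise, standing in for motor imagery.
X, y = [], []
for label in range(len(COMMANDS)):
    for _ in range(50):
        t = np.arange(2 * FS) / FS
        window = rng.normal(0, 1, (8, 2 * FS))
        window[label] += 0.8 * np.sin(2 * np.pi * (9 + 2 * label) * t)
        X.append(features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("decoded command:", COMMANDS[clf.predict(X_te[:1])[0]])
```

In practice the decoded command would be debounced and rate-limited before it ever drives a motor, since a single misclassification must not move the chair.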
Develop a computer vision system that translates American Sign Language (ASL) into text or speech, facilitating communication between deaf or hard-of-hearing signers and people who do not know ASL.
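A sketch of one plausible pipeline for the recognition stage, restricted to static fingerspelled letters: MediaPipe Hands extracts 21 hand landmarks per webcam frame, and a classifier trained elsewhere maps the wrist-relative landmark vector to a letter. The model file "asl_landmarks.joblib" is hypothetical, and full ASL translation would also have to handle motion, two-handed signs, facial expression, and grammar.

```python
# Sketch: webcam frames -> hand landmarks -> fingerspelled letter.
# Assumes a classifier trained separately on landmark vectors and saved
# as "asl_landmarks.joblib" (hypothetical file).
import cv2
import joblib
import numpy as np
import mediapipe as mp

clf = joblib.load("asl_landmarks.joblib")   # hypothetical pre-trained model
hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.7)

def landmark_vector(hand_landmarks):
    """Flatten 21 (x, y, z) landmarks, translated so the wrist is the origin."""
    pts = np.array([[lm.x, lm.y, lm.z] for lm in hand_landmarks.landmark])
    return (pts - pts[0]).ravel()           # wrist-relative coordinates

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        vec = landmark_vector(result.multi_hand_landmarks[0])
        letter = str(clf.predict([vec])[0])  # one fingerspelled letter
        cv2.putText(frame, letter, (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("ASL demo", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Working from landmarks rather than raw pixels keeps the classifier small and largely invariant to lighting and skin tone, which is why this split is a common starting point.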
Use electroencephalography (EEG) signals to train a machine learning model that converts brain activity into text, potentially giving non-verbal individuals a new channel of communication.
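One classical route from EEG to text is the P300 speller: rows and columns of a character grid flash in sequence, and an ERP classifier detects the attention-evoked response to pick out the intended cell. Below is a minimal sketch of the decoding logic only, with synthetic classifier scores standing in for real per-flash EEG epochs.

```python
# Sketch of a P300-speller decoder. The scores are synthetic stand-ins
# for the output of an ERP classifier applied to each flash epoch; real
# use requires recorded EEG and per-user calibration.
import numpy as np

rng = np.random.default_rng(1)
GRID = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")).reshape(6, 6)

def decode_character(flash_scores):
    """Pick the cell whose row and column drew the strongest P300 response.

    flash_scores maps ("row"|"col", index) -> mean classifier score over
    repeated flashes of that row or column.
    """
    best_row = max(range(6), key=lambda r: flash_scores[("row", r)])
    best_col = max(range(6), key=lambda c: flash_scores[("col", c)])
    return GRID[best_row, best_col]

# Simulate a user attending to the letter "H" (row 1, col 1): attended
# flashes get a boosted mean score, all others are noise around zero.
target_row, target_col = 1, 1
scores = {}
for axis in ("row", "col"):
    for i in range(6):
        attended = (axis == "row" and i == target_row) or \
                   (axis == "col" and i == target_col)
        scores[(axis, i)] = rng.normal(1.0 if attended else 0.0, 0.2)

print("decoded:", decode_character(scores))  # prints "H" with high probability
```

Averaging scores over many repeated flashes is what makes the speller reliable despite noisy single-trial EEG; the trade-off is typing speed, which is why repetitions per character are a key tuning parameter.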
Through these cross-disciplinary projects, we aim to design, build, and test BCI and computer vision technologies that empower individuals with disabilities. By enabling thought-driven wheelchair control, translating American Sign Language to text or speech, and converting brain signals into text, we aspire to improve mobility, communication, and independence for people with physical and communicative challenges, ultimately advancing their quality of life and contributing to impactful healthcare solutions.