To develop a Brain-Computer Music Interface (BCMI) to generate music based on a person's emotions, as determined by real-time EEG classification using an Emotiv kit.
This project has three main parts that can be developed in parallel. Connecting multiple real-time data feeds to one another may involve a number of technical difficulties.
- EEG data feed: We need to develop a pipeline for acquiring a real-time EEG data feed that can be attached to a classifier. This may be the most difficult step, as there are few good libraries available for acquiring this type of data from the Emotiv EPOC (a minimal acquisition sketch follows this list).
- Emotion Classification: We need to extract robust EEG features and train a classifier or regressor to determine the emotional content of EEG data. We will use publicly available EEG datasets for this task, and may want to incorporate elements of semi-supervised learning, given the abundance of unlabelled EEG data (see the classification sketch after this list).
- Music Generation: We need to generate music based on the identified emotions. This can be done using Ehrlich's algorithm, which has already been translated to Python. In the future, it may be better to replace Ehrlich's algorithm with a generative adversarial music generation model, though this would be a technically difficult challenge (a toy generation sketch also follows below).
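
A minimal sketch of the acquisition step, assuming the EPOC feed is already exposed as a Lab Streaming Layer (LSL) stream of type "EEG" (for example via CyKIT or EmotivPRO); the pylsl usage and window size here are illustrative assumptions rather than part of the current codebase:

```python
# Sketch: pull a fixed-length window of real-time EEG samples from an LSL stream.
# Assumes the Emotiv EPOC data is already published as an LSL stream of type "EEG"
# (e.g. by CyKIT or EmotivPRO); this bridge is an assumption, not part of this repo.
from pylsl import StreamInlet, resolve_stream
import numpy as np

def acquire_window(n_samples=256):
    """Collect one window of EEG samples, returned as (n_channels, n_samples)."""
    streams = resolve_stream('type', 'EEG')      # blocks until an EEG stream is found
    inlet = StreamInlet(streams[0])
    window = []
    while len(window) < n_samples:
        sample, timestamp = inlet.pull_sample()  # one multi-channel sample
        window.append(sample)
    return np.asarray(window).T

if __name__ == '__main__':
    eeg = acquire_window()
    print('Acquired window with shape', eeg.shape)
```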
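
A hedged sketch of the classification step, using per-channel band-power features (Welch's method via SciPy) and a scikit-learn SVM; the feature set, sampling rate, and model are placeholder choices rather than a fixed design:

```python
# Sketch of the classification step: band-power features + an SVM.
# Band definitions, sampling rate, and the SVM are illustrative choices only.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

BANDS = {'theta': (4, 8), 'alpha': (8, 13), 'beta': (13, 30), 'gamma': (30, 45)}

def band_power_features(eeg, fs=128):
    """eeg: (n_channels, n_samples) -> flat vector of per-channel band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(256, eeg.shape[-1]), axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))   # mean power per channel in band
    return np.concatenate(feats)

def train_classifier(windows, labels, fs=128):
    """windows: list of (n_channels, n_samples) arrays; labels: emotion classes."""
    X = np.stack([band_power_features(w, fs) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    clf.fit(X, labels)
    return clf
```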
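
For the generation step, the sketch below is not Ehrlich's algorithm; it is only a toy mapping from a (valence, arousal) estimate to a short MIDI phrase using the mido library, to illustrate the kind of interface the generator might expose (the resulting .mid file could then be rendered through VirtualMIDISynth):

```python
# Toy sketch of the generation interface: map (valence, arousal) in [0, 1] to a
# short MIDI phrase. This is NOT Ehrlich's algorithm -- just a placeholder mapping
# (major/minor scale from valence, tempo and velocity from arousal) using mido.
import mido

MAJOR = [0, 2, 4, 5, 7, 9, 11, 12]
MINOR = [0, 2, 3, 5, 7, 8, 10, 12]

def generate_phrase(valence, arousal, path='phrase.mid', root=60):
    scale = MAJOR if valence >= 0.5 else MINOR
    bpm = 60 + int(arousal * 100)             # higher arousal -> faster tempo
    velocity = 50 + int(arousal * 60)         # higher arousal -> louder notes
    ticks_per_beat = 480

    mid = mido.MidiFile(ticks_per_beat=ticks_per_beat)
    track = mido.MidiTrack()
    mid.tracks.append(track)
    track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(bpm)))

    for degree in scale:
        note = root + degree
        track.append(mido.Message('note_on', note=note, velocity=velocity, time=0))
        track.append(mido.Message('note_off', note=note, velocity=0,
                                  time=ticks_per_beat // 2))
    mid.save(path)                            # render e.g. with VirtualMIDISynth
    return path
```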
- Python 3.x. An Anaconda installation is highly recommended, as is the ability to run Jupyter Notebooks.
- All libraries listed in requirements.txt. Run `pip install -r requirements.txt` from the command line to install dependencies.
- VirtualMIDISynth. You can download this software from the VirtualMIDISynth website.
- Sound fonts for VirtualMIDISynth. I used these soundfonts.
- We intend to use the DEAP dataset for classifier training in the future.
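
A small sketch of loading one participant from DEAP, assuming the preprocessed Python release (pickled .dat files, each containing a 'data' array of shape trials x channels x samples and a 'labels' array of valence/arousal/dominance/liking ratings); the midpoint binarisation below is just one possible labelling scheme:

```python
# Sketch of loading one participant from DEAP's preprocessed Python release.
# Assumes s01.dat-style pickle files with 'data' (trials x channels x samples)
# and 'labels' (trials x [valence, arousal, dominance, liking]).
import pickle
import numpy as np

def load_deap_participant(path):
    with open(path, 'rb') as f:
        raw = pickle.load(f, encoding='latin1')   # files were pickled under Python 2
    data = np.asarray(raw['data'])      # EEG + peripheral channels per trial
    labels = np.asarray(raw['labels'])  # continuous 1-9 ratings per trial
    # Binarise valence/arousal at the scale midpoint for classification.
    valence = (labels[:, 0] > 5).astype(int)
    arousal = (labels[:, 1] > 5).astype(int)
    return data, valence, arousal
```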
- Ehrlich, S. K., Agres, K. R., Guan, C., & Cheng, G. (2019). A closed-loop, music-based brain-computer interface for emotion mediation. PLoS ONE, 14(3), e0213516. https://doi.org/10.1371/journal.pone.0213516