Humans are good at noticing emotions, but is it possible to recognize 500+ emotions in real time and draw the right conclusions?
DJ Neuromancer plays personalized music depending on the setting (party, wedding) and changes songs in real time based on the audience's emotional reception. In short, DJ Neuromancer helps play the right music at the right time and keeps the audience engaged.
Video Input
The system processes frames from any camera feed.
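A minimal sketch of what that frame sampling might look like with OpenCV; the function name capture_frames and the sampling interval are illustrative assumptions, not taken from feed.py:

```python
import cv2

def capture_frames(source=0, every_n=30):
    """Yield every n-th frame from a camera feed (0 = the default webcam)."""
    cap = cv2.VideoCapture(source)
    count = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # stream ended or camera unavailable
            if count % every_n == 0:
                yield frame  # frame goes on to emotion analysis
            count += 1
    finally:
        cap.release()
```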
Getting Started
Age, gender, and room setting determine the starting song; the detected emotion then either changes or keeps the genre.
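A hedged sketch of how such decision rules could look; the genre mappings, age threshold, and function names are illustrative assumptions, not the project's actual logic from decisionmaker.py:

```python
GENRES = ["edm", "pop", "classic rock", "jazz"]

# Hypothetical mapping from (setting, crowd age group) to an opening genre.
STARTING_GENRE = {
    ("party", "young"): "edm",
    ("party", "older"): "classic rock",
    ("wedding", "young"): "pop",
    ("wedding", "older"): "jazz",
}

def pick_starting_genre(setting: str, median_age: float) -> str:
    """Choose the opening genre from the room setting and crowd age."""
    age_group = "young" if median_age < 35 else "older"
    return STARTING_GENRE.get((setting, age_group), "pop")

def next_genre(current: str, emotion: str) -> str:
    """Keep the genre on positive reception, otherwise rotate to the next one."""
    if emotion in ("happiness", "surprise"):
        return current
    return GENRES[(GENRES.index(current) + 1) % len(GENRES)]
```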
Emotion Driven
The detected features are sent to the Spotify playback logic, and the song changes depending on the audience's reception of the music.
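One way to implement that step is with the spotipy client library (an assumption; the project may call the Spotify Web API directly). This sketch requires SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET, and SPOTIPY_REDIRECT_URI in the environment:

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-modify-playback-state user-read-playback-state"))

def play_genre(genre: str) -> None:
    """Find a track in the given genre and start playback on the active device."""
    result = sp.search(q=f'genre:"{genre}"', type="track", limit=1)
    items = result["tracks"]["items"]
    if items:
        sp.start_playback(uris=[items[0]["uri"]])

def react_to_emotion(emotion: str) -> None:
    """Skip the track when the crowd's reception turns negative."""
    if emotion in ("sadness", "anger", "disgust"):
        sp.next_track()
```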
Intelligent Selection
Over time, the system learns song preferences from the video feed to personalize selections even further.
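An illustrative sketch of such preference learning: keep an exponentially weighted score per genre from the emotions observed while that genre played. The names and smoothing factor are assumptions, not taken from analytics.py:

```python
from collections import defaultdict

POSITIVE = {"happiness", "surprise"}
scores = defaultdict(float)  # genre -> learned preference score

def update_preference(genre: str, emotion: str, alpha: float = 0.1) -> None:
    """Nudge the genre's score toward +1 on positive reactions, -1 otherwise."""
    reward = 1.0 if emotion in POSITIVE else -1.0
    scores[genre] = (1 - alpha) * scores[genre] + alpha * reward

def best_genre(default: str = "pop") -> str:
    """Return the genre with the highest learned score so far."""
    return max(scores, key=scores.get) if scores else default
```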
Hey, like Vikas suggested, add the areas (files) you worked on so that if anyone has trouble merging or runs into an exception, we can contact that person directly.
I worked on almost all the files, and I feel like I could help with any of them.
To scope it down, here are the areas I am more comfortable with:
- Spotify Web API calls (OAuth to some level)
- Any of the JavaScript files under the frontend/public/js folder
- app_server.py, feed.py, emotion.py
I worked on the entire backend folder and can pinpoint help in any of these files: analytics.py, app_server.py, azurevision.py, decisionmaker.py, emotion.py, webcamtoemotion.py, feed.py.
I worked as lead on the front end and researched cloud computing tools to meet our requirements.
- Spotify Web API calls (main lead)
- Entire front-end development (UI and UX)
- Video access and processing
- Reading documentation on cloud computing platforms to find the best solution