TUNE2KEY is an innovative platform that transforms audio, MIDI, and PDF files into beautifully rendered sheet music. Whether you're a beginner looking to simplify a piece or an expert aiming for a challenge, our AI-powered tool adjusts difficulty levels to match your needs. At the expert setting, the underlying transcription model achieves an onset F1 score of 96.72%, surpassing the previous state-of-the-art Onsets and Frames system (94.80%). Perfect for musicians of all skill levels, TUNE2KEY makes music creation and customization effortless.
- Convert MP3 to Sheet Music
- Simplify MIDI files by converting chords to single notes and quantizing rhythms
- Adjust difficulty levels of music pieces
- User-friendly interface
- Play MIDI files and convert them to MP3
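The MIDI simplification feature above (collapsing chords to single notes and quantizing rhythms) can be sketched as pure logic. The helper names and the 0.25-beat grid below are illustrative assumptions, not TUNE2KEY's actual implementation:

```python
from collections import defaultdict


def quantize(t: float, grid: float = 0.25) -> float:
    """Snap a note start time to the nearest grid point (grid is an assumed default)."""
    return round(t / grid) * grid


def simplify(notes, grid: float = 0.25):
    """notes: list of (start_time, midi_pitch) tuples.

    Quantize start times to the grid, then collapse each chord
    (notes sharing a quantized start) down to its highest pitch.
    """
    chords = defaultdict(list)
    for start, pitch in notes:
        chords[quantize(start, grid)].append(pitch)
    return sorted((t, max(pitches)) for t, pitches in chords.items())
```

For example, a C-major chord with slightly staggered onsets collapses to a single melody note on the beat.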
(Developed on macOS 15.1 with Python 3.11)
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/tune2key.git
  cd tune2key
  ```

- Install the required dependencies:

  ```bash
  chmod +x install.sh
  ./install.sh
  cd client
  npm install
  cd ..
  ```

- Ensure MuseScore is installed and available in your system PATH:

  ```bash
  mscore --version
  ```

  If MuseScore is not installed, download it from the MuseScore website.

- Start the backend server:

  ```bash
  cd server
  python app.py
  ```

- Start the frontend server (in a new terminal):

  ```bash
  cd client
  npm start
  ```

- Access the web application at `http://localhost:3000`.
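Because the backend shells out to MuseScore, a startup check on the Python side can fail fast with a clear message. This is a sketch; the candidate binary names (`mscore`, `mscore4`, `MuseScore4`) are assumptions that vary by platform and MuseScore version:

```python
import shutil


def find_musescore(candidates=("mscore", "mscore4", "MuseScore4")):
    """Return the path of the first MuseScore executable found on PATH, or None.

    The candidate names are assumptions; adjust them for your installation.
    """
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None
```

Calling `find_musescore()` at server startup lets `app.py` report a missing MuseScore install before any transcription request arrives.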
- Navigate to the upload page.
- Upload an MP3, MIDI, or PDF file.
- The file is processed, and you receive the corresponding sheet music and audio files.
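Uploads can also be scripted. Below is a minimal stdlib sketch of a multipart POST to the `/upload` endpoint; the backend address (`http://localhost:5000`) and the form field name (`"file"`) are assumptions, so adjust them to your deployment:

```python
import urllib.request

API_BASE = "http://localhost:5000"  # assumed backend address; adjust as needed


def build_multipart(field: str, filename: str, data: bytes,
                    boundary: str = "----tune2key-boundary"):
    """Build a multipart/form-data body and content type for one file upload."""
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        "Content-Type: application/octet-stream\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + data + tail, f"multipart/form-data; boundary={boundary}"


def upload(path: str) -> bytes:
    """POST a local file to /upload and return the raw response body."""
    with open(path, "rb") as f:
        body, content_type = build_multipart("file", path, f.read())
    req = urllib.request.Request(
        f"{API_BASE}/upload", data=body,
        headers={"Content-Type": content_type},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Usage would be `upload("song.mp3")` with the backend running; a third-party client like `requests` simplifies this to a single `files={...}` argument.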
- `POST /upload`: Upload a file for transcription.
- `GET /music_sheet/<name>`: Retrieve the generated sheet music PDF.
- `GET /audio/<name>`: Retrieve the generated MP3 file.
- `GET /progress/status/<name>`: Retrieve the progress of the transcription ("status" can be "pending", "processing", or "completed").
- `GET /download/<filename>`: Serve the generated music file to the client.
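A client typically uploads, then polls `GET /progress/status/<name>` until the status reaches "completed". The sketch below assumes the backend runs at `http://localhost:5000` and that the endpoint returns the status as a plain string; if it returns JSON, adapt the parsing accordingly:

```python
import time
import urllib.request
from urllib.parse import quote

API_BASE = "http://localhost:5000"  # assumed backend address; adjust as needed


def progress_url(name: str) -> str:
    """Build the URL for GET /progress/status/<name>."""
    return f"{API_BASE}/progress/status/{quote(name)}"


def is_done(status: str) -> bool:
    """The endpoint reports 'pending', 'processing', or 'completed'."""
    return status == "completed"


def wait_for(name: str, interval: float = 2.0, timeout: float = 300.0) -> None:
    """Poll the status endpoint until transcription completes or times out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(progress_url(name)) as resp:
            # Assumes a bare status string in the body; adjust if JSON.
            if is_done(resp.read().decode().strip().strip('"')):
                return
        time.sleep(interval)
    raise TimeoutError(f"transcription of {name!r} did not complete")
```

Once `wait_for` returns, the sheet music and audio are available via `GET /music_sheet/<name>` and `GET /audio/<name>`.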
This project is licensed under the MIT License. See the LICENSE file for details.
- Kong, Qiuqiang, et al. "High-resolution Piano Transcription with Pedals by Regressing Onset and Offset Times." arXiv preprint arXiv:2010.01815 (2020).
- Roberts, Adam, et al. "A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music." arXiv preprint arXiv:1803.05428 (2018).
- Copet, Jade, et al. "Simple and Controllable Music Generation." arXiv preprint arXiv:2306.05284 (2023).